http://codeforces.com/problemset/problem/429/A
A. Xor-tree
time limit per test: 1 second; memory limit per test: 256 megabytes; input: standard input; output: standard output

Iahub is very proud of his recent discovery, propagating trees. Right now, he invented a new tree, called the xor-tree. After this revolutionary discovery, he invented a game for kids which uses xor-trees. The game is played on a tree having n nodes, numbered from 1 to n. Each node i has an initial value init_i, which is either 0 or 1. The root of the tree is node 1. One can perform several (possibly zero) operations on the tree during the game. The only available type of operation is to pick a node x. Right after someone has picked node x, the value of node x flips, the values of the sons of x remain the same, the values of the sons of sons of x flip, the values of the sons of sons of sons of x remain the same, and so on. The goal of the game is to get each node i to have value goal_i, which can also only be 0 or 1. You need to reach the goal of the game using the minimum number of operations.

Input
The first line contains an integer n (1 ≤ n ≤ 10^5). Each of the next n - 1 lines contains two integers u_i and v_i (1 ≤ u_i, v_i ≤ n; u_i ≠ v_i), meaning there is an edge between nodes u_i and v_i. The next line contains n integers, the i-th of which corresponds to init_i (either 0 or 1). The following line also contains n integers, the i-th of which corresponds to goal_i (either 0 or 1).

Output
On the first line output an integer cnt, the minimal number of operations you perform. Each of the next cnt lines should contain an integer x_i, representing that you pick node x_i.

Example
Input
10
2 1
3 1
4 2
5 1
6 2
7 5
8 6
9 8
10 5
1 0 1 1 0 1 0 1 0 1
1 0 1 0 0 1 1 1 0 1
Output
2
4
7
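The standard greedy approach is a single DFS from the root: carry one parity bit for accumulated flips at even depths and one for odd depths, and pick a node whenever its effective value disagrees with its goal (the topmost mismatch is forced, since no deeper pick can affect it). A sketch in Python; the code and names are my own, not part of the statement:

```python
# Sketch of the standard solution: one DFS from the root, carrying the
# accumulated flip parity for even-depth and odd-depth nodes of the
# current subtree.
def solve(n, edges, init, goal):
    adj = [[] for _ in range(n + 1)]  # 1-indexed adjacency list
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    picks = []
    # Iterative DFS to avoid recursion limits for n up to 1e5.
    # State: (node, parent, flips at even depth, flips at odd depth, depth)
    stack = [(1, 0, 0, 0, 0)]
    while stack:
        node, parent, f_even, f_odd, depth = stack.pop()
        f_here = f_even if depth % 2 == 0 else f_odd
        if init[node - 1] ^ f_here != goal[node - 1]:
            picks.append(node)          # greedy: fix the topmost mismatch
            if depth % 2 == 0:
                f_even ^= 1
            else:
                f_odd ^= 1
        for nxt in adj[node]:
            if nxt != parent:
                stack.append((nxt, node, f_even, f_odd, depth + 1))
    return picks

# Sample from the statement: the answer is 2 operations, nodes 4 and 7.
sample_edges = [(2, 1), (3, 1), (4, 2), (5, 1), (6, 2),
                (7, 5), (8, 6), (9, 8), (10, 5)]
picks = solve(10, sample_edges,
              [1, 0, 1, 1, 0, 1, 0, 1, 0, 1],
              [1, 0, 1, 0, 0, 1, 1, 1, 0, 1])
```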
http://www.sciencemadness.org/talk/viewthread.php?tid=25559
Sciencemadness Discussion Board » Fundamentals » Reagents and Apparatus Acquisition » Mantle or hotplate & stirrer for first heating device? Author: Subject: Mantle or hotplate & stirrer for first heating device? Thanatops1s Hazard to Self Posts: 54 Registered: 24-6-2013 Member Is Offline Mood: No Mood Mantle or hotplate & stirrer for first heating device? I'm in the process of building up a lab and was wondering what people think would be better for a first purchase? I was originally going to get a mantle for distillation, but then thought that a hot plate would be a better option since I could use a water/oil/sand bath for round bottom flasks, plus I have the flat surface. I figure for first equipment, versatility is more important, especially when on a limited budget. There was a Thermolyne Cimarec 2 I was thinking of picking up. Any opinions on that model? Metacelsus International Hazard Posts: 2502 Registered: 26-12-2012 Location: Boston, MA Member Is Offline Mood: Double, double, toil and trouble A hotplate with a built-in magnetic stirrer is pricey, but nice to have. I recently got one after the insulation on my cheap, crappy "electric stove" type hotplate cracked and it started electrocuting me. However, if you're on a budget, I would recommend a cheap one, like what I used to have, from Target or whatever store sells them. They go as low as $12.99: http://www.target.com/p/kitchen-selectives-single-burner-sb1... I would definitely recommend a hotplate over a mantle; as you said, you can always use a bath. As below, so above.
Lambda-Eyde International Hazard Posts: 856 Registered: 20-11-2008 Location: Norway Member Is Offline Mood: Cleaved Definitely go for a hotplate stirrer; they can be used with beakers and Erlenmeyers as well as various heating baths for accommodating RBFs and oddly-shaped containers. Heating mantles are a luxury and something you should only consider when you have a better equipped lab and feel that you use RBFs often enough to justify it. The stirring feature is also something that will make your life much more pleasant. (I find it kind of ironic that I own 6 heating mantles and a magnetic stirrer without heating - haven't found the money to buy a hotplate stirrer yet.) I can't vouch for that particular make or model, but Corning and IKA make great stirrer hot plates. To afford them on a mortal's budget you'll have to buy on the used market. Steer away from the Chinese stuff on eBay. This just in: 95,5 % of the world population lives outside the USA You should really listen to ABBA Please drop by our IRC channel: #sciencemadness @ irc.efnet.org Thanatops1s Hazard to Self Posts: 54 Registered: 24-6-2013 Member Is Offline Mood: No Mood Quote: Originally posted by Cheddite Cheese A hotplate with a built-in magnetic stirrer is pricey, but nice to have. I recently got one after the insulation on my cheap, crappy "electric stove" type hotplate cracked and it started electrocuting me. However, if you're on a budget, I would recommend a cheap one, like what I used to have, from Target or whatever store sells them. They go as low as $12.99: http://www.target.com/p/kitchen-selectives-single-burner-sb1... I would definitely recommend a hotplate over a mantle; as you said, you can always use a bath. I'm not sure what you mean by pricey, but my budget is $100 or so. If you mean some of the high end ones brand new, yeah, I can't believe how much some are.
But there are two in particular of the same model on eBay now that end tomorrow, and the current high bid is $100, so I think I'm leaning towards one of those. I've seen some videos on YouTube of people using a round bottom flask directly on a hot plate; is it just me, or is that just asking to create a serious hot spot and break your flask? [Edited on 10-8-2013 by Thanatops1s] ElectroWin Hazard to Others Posts: 224 Registered: 5-3-2011 Member Is Offline Mood: No Mood When heating stuff for chemistry it is really desirable to be able to regulate temperature to within narrow ranges, so a hot plate with a sensor and closed-loop control goes a long way. I am lusting for an annealing oven for doing ceramics work, one that lets me program in a sequence of target temperatures and soak times. Lambda-Eyde International Hazard Posts: 856 Registered: 20-11-2008 Location: Norway Member Is Offline Mood: Cleaved Quote: Originally posted by Thanatops1s I've seen some videos on Youtube of people using a round bottom flask directly on a hot plate, is it just me or is that just asking to create a serious hot spot and break your flask? Unless we're talking very high temperatures I think that's highly unlikely. I've done it myself for my larger flasks for which I've not had a suitable mantle, together with a "skirt" of aluminium foil to reduce convection. The only problem with that technique I'd say is that you're not heating the contents uniformly and that it wastes time and electricity, which can be alleviated to an extent with the foil skirt. Edit: also, it is harder to maintain accurate and stable temperatures without a bath. [Edited on 10-8-2013 by Lambda-Eyde] This just in: 95,5 % of the world population lives outside the USA You should really listen to ABBA smaerd International Hazard Posts: 1262 Registered: 23-1-2010 Member Is Offline Mood: hmm... I chose a heating mantle and a stir plate. I never regretted it.
I mostly have an interest in organic chemistry, though, so for the year or so I went without a hot plate I had plenty of fun with round bottoms and room temperature/ice-bathed Erlenmeyers. Depends on what you're interested in, I guess. Thanatops1s Hazard to Self Posts: 54 Registered: 24-6-2013 Member Is Offline Mood: No Mood Quote: Originally posted by smaerd Depends on what you're interested in I guess. Well, I'm pretty much a beginner. I've been reading about chemistry for years and am just recently getting into actually doing things instead of just reading about them. So I guess right at this moment versatility is my main goal. Looks like the hot plate/stirrer is the way to go. Acidum Harmless Posts: 39 Registered: 2-5-2013 Location: Serbia Member Is Offline Mood: Sublimed Mostly we used a heating mantle for distillations of greater amounts of organic solvents (1-5 l), and a hotplate magnetic stirrer for everything else, from reactions to distillations of lesser amounts of chemicals. The versatility of a hotplate magnetic stirrer is just unsurpassed... We mostly used IKA, excellent quality indeed. That is at the faculty; at home I can only dream about those... But if I had a choice: first a hotplate stirrer, then a mantle for larger batch productions... ...and then I disappeared in the mist... Thanatops1s Hazard to Self Posts: 54 Registered: 24-6-2013 Member Is Offline Mood: No Mood http://www.ebay.com/itm/Thermolyne-Ceramic-2-Hotplate-Stirre... So I decided on the hot plate/stirrer, just won that: a Thermolyne Cimarec 2 for $102.50. There were a lot of pretty beat up ones on there around the same price; this one looks to be in pretty good condition. It was either not used a ton, or taken good care of. Thanatops1s Hazard to Self Posts: 54 Registered: 24-6-2013 Member Is Offline Mood: No Mood Just got it from FedEx this morning. This thing really is in excellent condition and seems very well made. I think I made the right purchase.
[Edited on 13-8-2013 by Thanatops1s] Blue Matter Hazard to Others Posts: 107 Registered: 20-6-2013 Location: US Member Is Offline Mood: Optimus I have a hotplate stirrer, a normal hot plate, and a mantle. I use my normal hotplate for messy reactions that might spill; the hotplate stirrer I only use on certain occasions when things need stirring and heating. But by far my favorite one is my Electrothermal electromantle I got off eBay; it heats up very fast and precisely. IMHO I think a hot plate is best for a while, but eventually you should get a heating mantle. MichiganMadScientist Hazard to Self Posts: 55 Registered: 22-7-2013 Member Is Offline Mood: No Mood I see that you have already made your purchase, and I too have a high opinion of Thermolyne stirrer/hotplates. Most of mine are made by Corning or those (Talboys), but I'm convinced that Corning is putting a rather weak magnet on its stirrers these days. Thanatops1s Hazard to Self Posts: 54 Registered: 24-6-2013 Member Is Offline Mood: No Mood Quote: Originally posted by Blue Matter I think hot plate is best for a while but eventually you should get a heating mantle I definitely plan to in the future. However, seeing as I'm just starting to build up my lab and am on a limited budget, versatility was my main goal here. Now once my ring stands and clamps come in, I can do my first distillation. For the first one, I'm just going to use salt water (figure I might as well do something nice and safe first), but then it's going to be some nitric acid. Blue Matter Hazard to Others Posts: 107 Registered: 20-6-2013 Location: US Member Is Offline Mood: Optimus It's fun distilling alcohols and seeing how close to pure you can get them by calculating density and then comparing to concentration charts; that's what I did when I got my setup. jamit National Hazard Posts: 372 Registered: 18-6-2010 Location: Midwest USA Member Is Offline Mood: No Mood Quote: Originally posted by Thanatops1s Just got it from FedEx this morning.
This thing really is in excellent condition and seems very well made. I think I made the right purchase. [Edited on 13-8-2013 by Thanatops1s] Thermolyne is a good choice for a hotplate stirrer. I have at one time or another owned all the brand names of hotplate/stirrer, and the one that is the best is Corning... any of the PC-351, 320, 420 and 420D and even 620/D are all excellent choices. I also have VWR and Thermolyne; the newer models are good but the older model is just "so-so". As for heating mantles... I also own like 5 different sizes and types from Glas-Col and Electromantle. But unless you are into organic chemistry... heating mantles are not as important as a good and reliable hotplate stirrer. eBay is the best place to get these items. [Edited on 15-8-2013 by jamit] ChemSwede Harmless Posts: 26 Registered: 20-8-2013 Location: Sweden Member Is Offline Mood: No Mood Hello. I'm currently looking for a magnetic stirrer. My budget is set to around 200 EUR. The cheapest ones are the Chinese stirrers on eBay, but I've heard that they can malfunction quite often. I would prefer one from Europe. I've found this seller on eBay, from the UK: http://viewitem.eim.ebay.se/-NEW-MAGNETIC-HOTPLATE-STIRRER--... It's from Maple Scientific. Seems good, but has anyone had any experience with them? They also have a homepage: http://www.maplescientific.co.uk/ Any other advice about where to get a good and not too pricey stirrer w/ hotplate would be appreciated. [Edited on 24-9-2013 by ChemSwede] Variscite Hazard to Self Posts: 69 Registered: 21-5-2013 Member Is Offline Mood: diffusing I've had my eye on the PC-351 for a little while and I wish to get it; the only problem I've heard is that it simply doesn't put out enough wattage to heat larger volumes of liquids efficiently (500 mL+). Does anyone have any input on this? Find me on Youtube at - Variscites-lab! http://www.youtube.com/user/Varisciteslab no videos yet, be some soon.
MichiganMadScientist Hazard to Self Posts: 55 Registered: 22-7-2013 Member Is Offline Mood: No Mood Quote: Originally posted by Variscite I've had my eye on the PC-351 for a little while and I wish to get it; the only problem I've heard is that it simply doesn't put out enough wattage to heat larger volumes of liquids efficiently (500 mL+). Does anyone have any input on this? Kind of an old thread, but I'll stick my two cents in... Any of the Corning brand PC-series hotplate stirrers should be more than sufficient in terms of heating. I've never had any problems with any of mine. If anything, my complaint is with the magnetic stirrer. For some reason the magnets on some of Corning's models are weak and fail to hold on to a stir bar at high rpms. But again, this is just my personal experience. Corning still makes a top notch product... You can always try wrapping the hotplate and beaker with aluminum foil to trap heat around the beaker. But again, I personally doubt that you'll even need to resort to this... ChemSwede Harmless Posts: 26 Registered: 20-8-2013 Location: Sweden Member Is Offline Mood: No Mood I decided to buy the stirrer/hotplate from Maple Scientific. I got it a bit cheaper after mailing with the seller, and it's in the EU, so no customs fee. It's Chinese made, but so far no problems. Heating and stirring work fine. Dr.Bob International Hazard Posts: 2436 Registered: 26-1-2011 Location: USA - NC Member Is Offline Mood: No Mood Quote: Originally posted by Thanatops1s I've seen some videos on Youtube of people using a round bottom flask directly on a hot plate, is it just me or is that just asking to create a serious hot spot and break your flask? I will add my $0.02 here. I would never use a hotplate to directly heat an RBF. Depending on the brand, size and type of flask, there is a good chance of breaking the flask, bumping of the solvent, or other issues. There cannot be good thermal transfer with only one point of contact.
I will use them for heating water in a beaker or Erlenmeyer flask, but only slowly and gently. And any highly flammable solvent is best heated by a water or oil bath (heating a pan of water/oil and then using that to heat the flask). That works fine for smaller RBFs. For larger ones (~1 L and above), that can be harder; that is where a heating mantle starts to make sense. elementcollector1 International Hazard Posts: 2684 Registered: 28-12-2011 Location: The Known Universe Member Is Offline Mood: Molten Personally, I prefer a sand bath - it's easy to set up, non-flammable, and heats flasks very evenly. Elements Collected: 52/87 Latest Acquired: Cl Next in Line: Nd Mailinmypocket International Hazard Posts: 1351 Registered: 12-5-2011 Member Is Offline Mood: No Mood If you use a coiled element hotplate, sand baths are great. Maybe it's my usual bad luck, but once I was heating a sand bath on a ceramic top hotplate and it burnt out. I'm not sure if sand baths retain too much heat and screw up the thermostats or something, or it might be bad luck. I wouldn't do that again though, personally. What a piss-off that was! Lambda-Eyde International Hazard Posts: 856 Registered: 20-11-2008 Location: Norway Member Is Offline Mood: Cleaved Quote: Originally posted by Mailinmypocket If you use a coiled element hotplate, sand baths are great. Maybe it's my usual bad luck, but once I was heating a sand bath on a ceramic top hotplate and it burnt out. I'm not sure if sand baths retain too much heat and screw up the thermostats or something, or it might be bad luck. I wouldn't do that again though, personally. What a piss-off that was! Sand is an excellent insulator, that's why. I only use sand for very high temperatures (>200 degrees). Below that I go for a heating bath or a mantle. Silicone oil is the ideal choice for a heating bath where water can't be used. Inert, non-flammable and very heat stable. You can get it for around $20 a liter on eBay.
Expensive, but well worth the investment considering two liters should last indefinitely. This just in: 95,5 % of the world population lives outside the USA You should really listen to ABBA
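ElectroWin's point earlier in the thread, that a sensor plus closed-loop control is what makes a heating device hold a narrow temperature range, is easy to sketch. Below is a toy on/off (hysteresis) controller driving a made-up first-order thermal model of a hotplate; every constant is illustrative, not taken from any real device:

```python
# Toy closed-loop hotplate: a temperature sensor plus simple on/off
# (hysteresis) control holds the temperature within a narrow band.
# The thermal model and all constants are illustrative only.
def simulate(setpoint=100.0, band=2.0, t_amb=20.0, steps=60_000, dt=0.01):
    temp, heater = t_amb, False
    lo, hi = setpoint - band, setpoint + band
    for _ in range(steps):
        if temp <= lo:
            heater = True            # too cold: switch the element on
        elif temp >= hi:
            heater = False           # too hot: switch the element off
        heat_in = 5.0 if heater else 0.0      # heating rate, K/s
        loss = 0.02 * (temp - t_amb)          # Newtonian cooling, K/s
        temp += (heat_in - loss) * dt
    return temp

final_temp = simulate()  # settles inside the 98-102 degree band
```

The same plant with no feedback (a fixed duty cycle, like a cheap kitchen hotplate) drifts with ambient conditions, which is exactly why a controlled unit is worth the extra money.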
http://mathhelpforum.com/advanced-math-topics/28919-help-please.html
Hey everyone! I was wondering if someone could suggest some good sites that would help me with my advanced mathematics class. I am learning about logic and proofs (i.e. propositions, conditionals, quantifiers, basic proof methods, etc.) and set theory (i.e. set operations, basic concepts, induction, indexed families of sets, etc.). I really don't understand these topics and would like some additional help. So anything you can suggest would be much appreciated! Thank you!! 2. Originally Posted by calcprincess88 Hey everyone! I was wondering if someone could suggest some good sites that would help me with my advanced mathematics class. I am learning about logic and proofs (i.e. propositions, conditionals, quantifiers, basic proof methods, etc.) and set theory (i.e. set operations, basic concepts, induction, indexed families of sets, etc.). I really don't understand these topics and would like some additional help. So anything you can suggest would be much appreciated! Thank you!! Here is like a crash course in advanced calc; these helped me a little bit when I was doing advanced calc (though I suppose you're doing some kind of prerequisite to advanced calc). You may also want to try the MIT OpenCourseWare website; they often have video lectures for courses as well as course materials. There are a lot of things in the topics you mentioned. Maybe if you took them one at a time and expounded on your difficulties, we would be able to help more. What text are you using? 3. Originally Posted by calcprincess88 Hey everyone! I was wondering if someone could suggest some good sites that would help me with my advanced mathematics class. I am learning about logic and proofs (i.e. propositions, conditionals, quantifiers, basic proof methods, etc.) and set theory (i.e. set operations, basic concepts, induction, indexed families of sets, etc.). I really don't understand these topics and would like some additional help. So anything you can suggest would be much appreciated! Thank you!!
It sounds like you need to look at one or more of the books written to cover the transition between school and real maths. An inexpensive example is Tim Gowers: "Mathematics: A Very Short Introduction". (Even though in the US this is twice the price it is in the UK.) RonL 4. Originally Posted by Jhevon What text are you using? I am using A Transition to Advanced Mathematics, Sixth Edition, by Douglas Smith, Maurice Eggen, and Richard St. Andre. 5. Originally Posted by CaptainBlank It sounds like you need to look at one or more of the books written to cover the transition between school and real maths. An inexpensive example is Tim Gowers: "Mathematics: A Very Short Introduction". It happens to be interesting (as I mentioned before). Of all the Very Short Introduction series, the top rated one on Amazon is the one on math. 6. Originally Posted by ThePerfectHacker It happens to be interesting (as I mentioned before). Of all the Very Short Introduction series, the top rated one on Amazon is the one on math. Given that it was written by Tim Gowers, that is not surprising. RonL 7. Wow, now you guys are getting me interested. I think I'm going to buy the book. 8. Originally Posted by calcprincess88 I am using A Transition to Advanced Mathematics, Sixth Edition, by Douglas Smith, Maurice Eggen, and Richard St. Andre. Ah, OK. So I suppose this is for a prerequisite course to advanced calc? Or you may call it analysis. At first I thought that was the same book I used; the title is the same, but the authors are different. Some of the stuff on the site I gave you may be too advanced then, but some stuff will still help, I'm sure. 9. Originally Posted by CaptainBlank Given that it was written by Tim Gowers, that is not surprising. I know him from this speech. 10. Originally Posted by ThePerfectHacker I know him from this speech. Yes, I have seen it. RonL 11. Thank you for your help! Hopefully I'll get a good grade on my exam tomorrow. 12.
Originally Posted by calcprincess88 Thank you for your help! Hopefully I'll get a good grade on my exam tomorrow. Oh, you got help from the posts? That's good.
http://math.stackexchange.com/users/5282/fuzxxl?tab=activity&sort=revisions&page=2
# FUZxxl

reputation 825 · website fuz.su/~fuz · location Berlin, Germany · age 19 · member for 3 years, 8 months · seen Sep 15 at 6:33 · profile views 288

I am a student of computer science and mathematics at the Humboldt University of Berlin.

# 30 Revisions

- Apr 4: revised "wave equation and superposition" (Texify images)
- Mar 23: revised "factorization of a^n+1?" (Encapsulate $\LaTeX$ into dollars)
- Mar 20: revised "Salt concentration as a function of time" (Use LaTeX formatting)
- Mar 20: revised "Prove inequality: When $n > 2$, $n! < {\left(\frac{n+2}{\sqrt{6}}\right)}^n$" (beautify formatting)
- Mar 19: revised "Create Fisheye from image" (Changed formatting)
- Mar 6: revised "Why does 0! = 1?" (added 122 characters in body)
- Mar 5: revised "Equality of polynomials: formal vs. functional" (added 47 characters in body)
- Mar 4: revised "Equality of polynomials: formal vs. functional" (added 73 characters in body)
- Feb 12: revised "Does the formula $\sqrt{ 1 + 24n }$ always yield prime?" (added 24 characters in body)
- Jan 1: revised "How to find a closed form for a sum involving $\max(x,y)$" (Added more explanation)
https://forum.math.toronto.edu/index.php?PHPSESSID=ceq7sbga0hc5f0kne2enievl81&topic=1399.0
Author Topic: TT1 Problem 3 (morning)  (Read 1858 times) Victor Ivrii • Elder Member • Posts: 2599 • Karma: 0 TT1 Problem 3 (morning) « on: October 19, 2018, 03:55:03 AM » (a) Show that $u(x,y)= 8xy^3 -8x^3 y+ 5x$ is a harmonic function. (b) Find the harmonic conjugate function $v(x,y)$. (c) Consider $u(x,y)+iv(x,y)$ and write it as a function $f(z)$ of $z=x+iy$. Vedant Shah • Jr. Member • Posts: 13 • Karma: 8 Re: TT1 Problem 3 (morning) « Reply #1 on: October 19, 2018, 09:23:34 AM » (a) $U_{xx} = \frac{\partial}{\partial x} \frac{\partial}{\partial x} U = \frac{\partial}{\partial x}\left(8y^3 - 24x^2 y + 5\right) = -48xy, \\ U_{yy} = \frac{\partial}{\partial y} \frac{\partial}{\partial y} U = \frac{\partial}{\partial y}\left(24x y^2 - 8x^3\right) = 48xy, \\ U_{xx} + U_{yy} = -48xy + 48xy = 0.$ Thus, $U$ is harmonic. (b) By Cauchy-Riemann: $V_y = U_x = 8y^3 - 24x^2 y + 5 \Rightarrow V = 2y^4 - 12 x^2 y^2 +5y +h(x),\\ V_x = -U_y = -24x y^2 + 8x^3 \Rightarrow V = -12 x^2 y^2 + 2x^4 + g(y) \\ \Rightarrow V(x,y) = 2x^4 - 12 x^2 y^2 + 2y^4 + 5y.$ (c) $$f(x,y) = U(x,y) + iV(x,y) = 8xy^3 - 8x^3y+5x + i(2x^4 - 12 x^2 y^2 + 2y^4 + 5y) \\ = 2i(x^4 + 4ix^3y - 6x^2y^2 -4ixy^3 +y^4) + 5(x+iy) \\ = 2i(x+iy)^4 + 5(x+iy) \\ \Rightarrow f(z) = 2i {z}^4 + 5z\color{red}{+Ci}.$$ « Last Edit: October 20, 2018, 03:08:41 PM by Victor Ivrii »
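The algebra above can be machine-checked. A quick SymPy verification (my own addition, not part of the thread) of the harmonicity of $u$, the Cauchy-Riemann equations for the conjugate $v$, and the closed form $f(z) = 2iz^4 + 5z$:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = 8*x*y**3 - 8*x**3*y + 5*x

# (a) u is harmonic: its Laplacian vanishes identically.
assert sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2)) == 0

# (b) the conjugate found in the thread satisfies Cauchy-Riemann.
v = 2*x**4 - 12*x**2*y**2 + 2*y**4 + 5*y
assert sp.simplify(sp.diff(u, x) - sp.diff(v, y)) == 0   # u_x = v_y
assert sp.simplify(sp.diff(u, y) + sp.diff(v, x)) == 0   # u_y = -v_x

# (c) u + iv collapses to 2i*z**4 + 5*z with z = x + i*y.
z = x + sp.I*y
assert sp.expand(u + sp.I*v - (2*sp.I*z**4 + 5*z)) == 0
```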
http://science.sciencemag.org/content/163/3874/1458
Reports # Lactose Synthetase: Progesterone Inhibition of the Induction of α-Lactalbumin Science 28 Mar 1969: Vol. 163, Issue 3874, pp. 1458-1460 DOI: 10.1126/science.163.3874.1458 ## Abstract Lactose synthesis in the mammary gland is dependent on the hormonally controlled synthesis of the two protein components of lactose synthetase, α-lactalbumin and a galactosyltransferase. Prolactin induces the synthesis of both proteins in mammary gland explants treated with insulin and hydrocortisone, but the induction kinetics cannot account for the asynchronous synthesis of the two proteins that is observed in vivo. Progesterone appears to take part in the control of lactose synthesis and acts to repress the formation of α-lactalbumin throughout pregnancy. At parturition, when the concentration of progesterone in the plasma decreases, the rate of α-lactalbumin synthesis increases.
https://electronics.stackexchange.com/questions/93710/how-do-dc-motors-work-with-respect-to-current-and-what-consequence-is-the-curre
How do DC motors work with respect to current, and what consequence is the current through them? Motors in general have always been a difficult subject that I cannot fully wrap my head around. Considering DC motors, what determines the rate at which the motor spins? It was my understanding that a permanent magnet created the field against which the current through the motor would act via the conductor's induced field. As current increases, the induced field would thus increase, thereby increasing the rotational speed. However, I have read quite a bit of material that has led me to realize that I was incorrect. Namely, what is said at this link about DC motors: DC Motors For example, the same circuit schematic as above produces (considering back EMF) the governing equations: So we have the current through the motor as a function of the back EMF. Is the back EMF a function of the load on the motor? Is it that the EMF is generated in such a manner that current is limited by the lessening of the potential difference between it and the supplied voltage? The governing equations dictate that if the applied voltage is lowered, then the back EMF decreases further, which in turn will decrease the current demanded by the circuit (through the motor). So, is the current through the motor just an indirect indication of the speed, or how does the current otherwise affect the operation? Are all DC motors (aside from brushless) similar? • "The EMF is generated in such a manner that current is limited by the lessening of the potential difference between it and the supplied voltage?" CORRECT. "Is the current through the motor just an indirect indication of the speed?" CURRENT INDUCES TORQUE. If induced torque exceeds demand (load), speed can increase. As speed increases, drag increases, counter-EMF increases, and speed stabilizes.
– Optionparty Dec 14 '13 at 20:29 At the end of the day you have to realise an electrical machine is basically an electrical-energy-to-mechanical-energy converter that utilises magnetic fields as the link. The magnetic field/flux is either generated by permanent magnets or via electromagnets. Motors in general have always been a difficult subject that I cannot fully wrap my head around. Considering DC motors, what determines the rate at which the motor spins? The rate at which the electrical machine's rotor will spin is fundamentally the same for all electrical machine types (induction, synchronous, SR, BLDC, BLAC, brushed, hysteresis ...): the rate of change of flux. How this rate of change is created is very specific to each machine. But basically, by creating a magnetic flux on the stator & the rotor, the rotor will attempt to align itself just like magnets do. This electromagnetic torque manifests itself as mechanical torque (due to being perpendicular to a freely rotating axis). A torque acting on some inertia results in an acceleration that would want to take the rotor to infinite speed. It can't, because of Lenz's law. You now have a rotating magnetic field passing by coils; this induces a voltage which opposes the voltage source you are using to force current into the electrical machine to generate a magnetic field to produce EM_Torque. The faster you go, the higher this voltage, and the more it opposes the voltage source you are using. At some point you are no longer able to force current into the windings to create a magnetic field => no more EM_Torque --> no more rotor torque --> no more acceleration. As mentioned, different machines create the changing flux by different mechanisms: • Brushed machine (DC stator, DC rotor) PM stator & a wound rotor; brushes are used to transfer electrical power to the rotor to create a DC current and thus a unidirectional magnetic field on the rotor. Apply the voltage source and the rotor will turn to align itself.
This causes “commutation” to occur via the brushes, and the rotor magnetic field is changed, pushing it away from the present stator pole & attracting it to the next. More voltage ==> more EM_Torque ==> faster commutation. • Synchronous machine (AC stator, DC rotor) Wound rotor, wound stator. Power is usually transferred to the rotor via a Main-Exciter (basically a rotating transformer), producing a DC current in the rotor that does not change direction. The stator is then excited with an AC voltage source. The rotor will “lock onto” this varying stator field and will be essentially dragged around with it. To increase the speed of a synchronous machine, the frequency of the voltage source to the stator is changed: higher == faster. • BLAC, BLDC (AC stator, DC rotor) These are basically just synchronous machines, but they have permanent magnets on the rotor. The higher the stator frequency, the higher the rotor speed. AC & DC just comes from the type of current control that is used. • Switched reluctance (AC stator ... rotor) Beautiful machines: salient rotor, NO WINDINGS, NO FIELD GENERATION. Wound stator. The stator is excited to produce a flux. An unaligned rotor will experience reluctance torque and attempt to align itself to minimise the reluctance in the present magnetic circuit ==> mechanical torque ==> acceleration. Once alignment occurs you stop firing the stator and let the rotor “coast” for a short period before firing again. • Induction machine (AC stator, AC rotor) Wound stator, wound rotor. Unlike a synchronous machine, however, the rotor windings are usually shorted (creating a squirrel-cage-like construction). Applying an AC voltage to the stator creates an AC magnetic field. This induces a voltage on the rotor &, because it is shorted, produces a current which in turn creates a magnetic field to be dragged around by the rotating stator field. • Such a wonderfully thorough answer.
So, as you say, the back emf is governed by Lenz's law due to the permanent rotor magnet passing by the coils. So, is the back emf proportional to the rotor's speed, or inversely related? – sherrellbc Dec 14 '13 at 21:33 • Generally yes. There are two "constants" associated with electrical machines: Kt (torque constant) and Ke (back-EMF constant). How "constant" they are, or whether they depend on other machine-specific characteristics, is machine-topology dependent. In their simplest form, $V = K_e \omega$ and $T = K_t i$. – JonRB Dec 14 '13 at 21:46 I sometimes think of an ideal motor. Ideal in that it has no resistance, no friction. It acts like a generator with an output voltage of $Kf$, where $K$ is a constant that depends on the motor design, and $f$ is the frequency. This is not too bad for a permanent magnet motor. You apply a voltage and it draws current and spins up. It reaches a constant speed and no longer consumes power, so the current is now 0. The speed will be given by $V = Kf$: the generated voltage just opposes the applied voltage, and that is why the current is 0. You can also use this to think about small deviations from the ideal and what they would do. Not very rigorous, but it gives me some insight. You seem to be missing that a motor is also a mechanical machine. Newton's second law is very relevant, saying that force $F$ is the product of mass $m$ and acceleration $a$: $$F = ma$$ Here, the force is the torque produced by the motor. If this torque is equal to the torque offered by the load (by friction, for example) then there will be no net force, thus no acceleration, and the motor will spin at a constant speed, whatever that happens to be. If the motor's torque is more or less, the mechanical load accelerates or decelerates. The torque of the motor, as a first approximation, is proportional to the current through the motor. More current results in a stronger magnetic field, thus more torque.
The motor may spin faster, if there is net torque in that direction, according to Newton's law above. As the motor spins, the rotor also moves through the stator field. It is, essentially, a generator at the same time as it is being a motor. The back-EMF is, as a first approximation, proportional to motor speed. The back-EMF appears in series with the inductance and resistance of the motor windings, and in the most intuitive situation where the mechanical load isn't forcing the motor to run backwards relative to the voltage applied to the motor terminals, the back-EMF opposes the applied voltage (Lenz's law). So, if you hook a motor to a 12V battery, and it's turning fast enough that the back-EMF is 10V, then it's like you are applying 2V to the motor. This explains the equation: $$I = \frac{V-\mathcal{E}}{R}$$ $V-\mathcal{E}$ is really just the net voltage applied to the motor, so this is just Ohm's law: $I=V/R$. We can do this because real motors have significant DC winding resistance. Here's a neat mental exercise: what would happen if you had a motor with zero winding resistance, and an ideal voltage source to power it? $$\lim_{R \searrow 0} \frac{V-\mathcal{E}}{R} = \infty$$ That is, as the winding resistance approaches 0, the current drawn by the motor approaches infinity. Since force is proportional to current, force also approaches infinity. Thus, a motor with no resistance has perfect speed regulation: any attempt to deviate from the speed set by the applied voltage results in an infinite current which results in infinite force to correct the speed discrepancy. The current through the voltage source will be proportional to the force required to maintain that speed. Real motors, having some resistance, only approximate this behavior. 
If connected to a voltage source (car battery) then if you try to slow the motor (brake it with your hand) the back-EMF decreases, which results in more net voltage across the windings, which increases current, which increases force, which makes the motor try to not be slowed by your hand. The extent to which the motor is good at doing this is inversely proportional to the series resistance of the voltage source and motor. • What is meant by stator field? Are you referring to the rotor moving through the permanent magnetic field? Also, when back-EMF is generated, the polarity is such that it opposes the polarity of the supplied voltage, V. When I have read of back-EMF protection, I noted that diodes are ubiquitously used and are placed in such a way as to be reverse biased by the induced EMF voltage. How is it that when the main supply, V, is shut off, this reverse-biased diode does anything to mitigate the damage to the motor? – sherrellbc Dec 14 '13 at 23:27 • If this torque is equal to the torque offered by the load (by friction, for example) then there will be no net force, thus no acceleration, and the motor will spin at a constant speed If there is no net force then shouldn't it be that there is no rotation at all? The applied torque equals the opposing frictional forces and the motor is stalled. What am I missing here? – sherrellbc Dec 14 '13 at 23:28 • The thought experiment regarding the limit of the winding resistance is an interesting one. That is to say that motors will draw more current in frigid environments while providing identical mechanical work. Furthermore, this conclusion excites the mind even more considering that most electronic equipment tends to work efficiently in colder environments. – sherrellbc Dec 14 '13 at 23:30 • @sherrellbc The stator field is the other magnetic field that works against the rotor field. It could be permanent magnets, or it could be another winding, depending on the motor design.
Regarding motion and net force, I think you should review Newton's laws of motion. Also, ambient temperature won't affect winding resistance very much, and the fact that it draws more current doesn't mean it's "less efficient" or "worse". In fact, the resistance represents electrical energy lost to heat, so as resistance goes down, the motor becomes more efficient. – Phil Frost Dec 15 '13 at 9:38
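The relations discussed in this thread ($I = (V-\mathcal{E})/R$, back-EMF proportional to speed, torque proportional to current) can be sketched numerically. The following is a minimal first-order simulation, not code from the thread; all parameter values ($R$, $K_e$, $K_t$, $J$, the load torque) are invented for illustration:

```python
# Minimal sketch of the brushed DC motor model discussed above, assuming
# the usual first-order relations V = i*R + Ke*w and T = Kt*i.
# All parameter values here are invented for illustration.

def simulate_motor(V, R, Ke, Kt, J, T_load=0.0, dt=1e-4, t_end=2.0):
    """Euler-integrate J*dw/dt = Kt*i - T_load with i = (V - Ke*w)/R."""
    w = 0.0  # rotor speed, rad/s
    for _ in range(int(t_end / dt)):
        i = (V - Ke * w) / R        # net voltage across the winding resistance
        torque = Kt * i             # torque proportional to current
        w += dt * (torque - T_load) / J
    return w, (V - Ke * w) / R

# Unloaded: speed settles near V/Ke and the current collapses toward zero.
w_free, i_free = simulate_motor(V=12.0, R=1.0, Ke=0.05, Kt=0.05, J=1e-3)
# Braked (as in the hand-braking example): speed drops, so back-EMF drops
# and the current rises until Kt*i balances the load torque.
w_load, i_load = simulate_motor(V=12.0, R=1.0, Ke=0.05, Kt=0.05, J=1e-3,
                                T_load=0.1)
```

With these numbers the unloaded speed approaches $V/K_e = 240$ rad/s, while the braked motor settles where $K_t i$ equals the load torque, i.e. around 2 A.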
http://math.stackexchange.com/questions/672176/exponent-of-matrix
# Exponential of a matrix I got this problem: Let $A \in M_n(\mathbb R)$ such that $A = -A^T$. Prove that $e^A$ is an orthogonal matrix. I succeeded in showing that $e^{A^T}e^A=I$ but did not succeed in proving that $e^{A^T}=(e^A)^T$. Any suggestions? Thanks to helpers! Consider the definition and the formula: $e^A =\sum_{i=0}^\infty \frac{1}{i!}A^i$ and $(XY)^T = Y^TX^T$ – HK Lee Feb 11 '14 at 10:58 I still have a problem. Could you explain more? – Shlomi Feb 11 '14 at 11:07 I will add details. The transpose can be taken term by term because it is linear and continuous: $$[e^A]^T = \bigg[\sum_{i=0}^\infty \frac{1}{i!}A^i \bigg]^T = \sum_{i=0 }^\infty \frac{1}{i!} (A^i)^T=\sum_{i=0 }^\infty \frac{1}{i!} (A^T)^i =e^{A^T}$$
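A quick numerical sanity check of the identity (a sketch using NumPy, not part of the original thread; the series is truncated, which is fine for a matrix of small norm):

```python
import numpy as np

def expm_series(A, terms=40):
    """e^A via the truncated power series sum_{i=0}^{terms-1} A^i / i!."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for i in range(1, terms):
        term = term @ A / i          # builds A^i / i! incrementally
        result = result + term
    return result

# A skew-symmetric matrix: A = -A^T.
A = np.array([[ 0.0,  1.5, -0.3],
              [-1.5,  0.0,  0.7],
              [ 0.3, -0.7,  0.0]])

E = expm_series(A)
# (e^A)^T = e^(A^T) = e^(-A), and A commutes with -A,
# so E^T E = e^(-A) e^A = e^0 = I up to rounding.
orthogonality_error = np.max(np.abs(E.T @ E - np.eye(3)))
```

Note that the final step $e^{A^T}e^A = e^{A^T+A}$ is valid here precisely because $A^T = -A$ commutes with $A$.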
http://simonwinder.com/2015/07/energy-pooling-in-neural-networks-for-digit-recognition/
# Energy pooling in neural networks for digit recognition Having trained a two layer neural network to recognize handwritten digits with reasonable accuracy, as described in my previous blog post, I wanted to see what would happen if neurons were forced to pool the outputs of pairs of rectified units according to a fixed weight schedule. I created a network which is almost a three layer network, where the outputs of pairs of the first layer rectified units are combined additively before being passed to the second fully connected layer. This means that the first layer has a 28×28 input and a 50 unit output (hidden layer) with rectified linear units; pairs of these units are then averaged to reduce the neuron count to 25, and the second fully connected layer reduces this down to 10. Finally the softmax classifier is applied. This architecture is identical to a two layer network where the second layer has half as many weights and pairs of input units share the same weight. I was interested to know how well this would perform compared to the earlier network with more parameters, and what kind of first layer weights would be learned. The result is that the network gets an error rate of 8.45% for $\pm 4$ pixel position jittered MNIST figures with added Gaussian noise (described previously). This compares to 7.9% for the earlier full network without shared weights (double the number of free parameters). However, it learns interesting weight patterns: Notice that consecutive weight maps are similar, particularly in the orientation of the features that are selected for. Often the weights are complementary in sign or else are shifted spatially compared to each other.
This has the effect of providing some position independence in a similar manner to complex cell sub-units in the visual cortex, because the rectified positive output of one unit will partially overlap with the output of the other one in the pair, increasing the area over which a positive response is generated, without changing the linear spatial selectivity. In a similar manner I tried actually using a geometric combination of pairs of outputs of the first layer linear units without the rectification layer. The formula is $y = \sqrt{x_1^2 + x_2^2 + \epsilon}$. (The addition of $\epsilon$ is necessary to remove the derivative singularity at zero.) Without the rectification layer, both positive and negative parts of the first layer unit responses can contribute positively to the hidden layer inputs which introduces a significant second order nonlinearity. In particular, if the weights of the input layer end up generating responses in spatial phase quadrature then the unit will be completely phase independent as in complex cell receptive fields. This contributes to spatial location invariance. The results show that magically most of the input weights for the geometrically summed units do end up nicely in phase quadrature: The error rate for this network is 7.53%, somewhat better than before. This is probably because the geometric addition introduces better spatial invariance than adding rectified outputs. Incidentally, if you run this on the raw MNIST data (without added distortions to make the recognition harder), the test error rate is a very respectable 2.1%, with the training error down at 0.19%. This is a good result for a two layer net with 50 hidden units. Next, I will be exploring convolutional layers.
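The two pooling schemes described above can be sketched as follows. This is a toy NumPy illustration rather than the original training code, so the weights are random instead of learned:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(28 * 28,))              # a flattened 28x28 input image
W = rng.normal(size=(50, 28 * 28)) * 0.01    # first-layer weights (untrained)
b = np.zeros(50)

z = W @ x + b                                # 50 first-layer linear responses

# Scheme 1: rectify, then average fixed pairs (units 2k, 2k+1) -> 25 units.
relu = np.maximum(z, 0.0)
pooled_avg = 0.5 * (relu[0::2] + relu[1::2])

# Scheme 2: energy (geometric) pooling of the *linear* pair responses,
# y = sqrt(x1^2 + x2^2 + eps); eps removes the derivative singularity at 0.
eps = 1e-6
pooled_energy = np.sqrt(z[0::2] ** 2 + z[1::2] ** 2 + eps)
```

In scheme 2 both signs of the linear responses contribute, which is what allows phase-quadrature weight pairs to give a phase-independent output.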
https://core.ac.uk/display/39720034
Abstract This thesis discusses one of the probes of a Quark Gluon Plasma (QGP): direct photon emission. The QGP is a state of matter that is hypothesized to exist at high baryon densities and high temperature. These circumstances are only available for experiments in heavy-ion collisions, and even there the presence of the QGP cannot be measured directly. Several indications of a QGP have already been detected in experiments at the SPS collider at CERN, but the evidence is still inconclusive as to whether the QGP has been seen. The direct photon signal consists of the photons emitted in the early phases of a collision, partly in thermal processes. The spectrum of these photons is highly dependent on the thermal evolution of the medium, and a phase transition from the QGP to hadronic matter will have a detectable effect on this thermal spectrum. Observation of the direct signal is complicated by the presence of a high number of other photon sources during the collision, mainly the decay of neutral mesons, in the later phases of the collision. One way that this background can be estimated is by an invariant-mass analysis, in which the invariant mass is calculated of all pairs of detected photons. In this thesis, an alternative method is proposed to eliminate the decay photons from the detected photon signal. The method depends on the measurement of the photon spectrum for several centrality classes. By subtracting a scaled peripheral photon spectrum from the central photon spectrum, the decay photon spectrum can be eliminated, and the remaining signal consists of direct photons only.
Because this analysis uses the ratio of measured spectra at different centralities, it is less sensitive to a number of systematic effects, compared to the invariant-mass analysis. Our inclusive photon analysis has been performed on the photon data of Pb+Pb collisions in the WA98 experiment at a beam energy of 158 GeV per nucleon. Using our method, it was possible to produce a direct photon spectrum for transverse photon momenta between 0.5 GeV/c and 2.0 GeV/c. For the lower part of this interval, this is the first time that a direct photon signal has been extracted. At higher momenta, the results show a good correspondence with earlier results of the WA98 invariant-mass analysis. The results are compared with the outcome of a simple hydrodynamical model first proposed by Bjorken. This shows that the direct photon signal that we found is compatible with an initial temperature of about 300 MeV, and a transition temperature of 180 MeV. With these parameters, the model shows that most of the thermal photons originate in the QGP/hadron gas mix during the phase transition, or in the following hadron gas phase. Topics: Natuur- en Sterrenkunde, nuclear physics, direct photons, heavy-ion collisions, quark gluon plasma Publisher: Utrecht University Year: 2007 OAI identifier: oai:dspace.library.uu.nl:1874/23457
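The centrality-subtraction method described in the abstract can be illustrated schematically. This is a toy sketch with invented numbers, not WA98 data: if the decay-photon component has the same shape in central and peripheral events, scaling the peripheral spectrum to match the central decay yield and subtracting leaves only the direct photons:

```python
import numpy as np

# Hypothetical binned photon p_T spectra (arbitrary units); values invented.
pt = np.linspace(0.5, 2.0, 7)[:-1]           # lower edges of p_T bins, GeV/c
decay_shape = np.exp(-2.0 * pt)              # common decay-photon shape

central_decay = 100.0 * decay_shape          # decay photons in central events
peripheral = 20.0 * decay_shape              # peripheral: decay photons only
direct_true = 4.0 * np.exp(-1.5 * pt)        # extra direct source (central only)

central = central_decay + direct_true

# Scale the peripheral spectrum so its decay component matches the central
# one, then subtract; the decay contribution cancels bin by bin.
scale = 100.0 / 20.0
direct_extracted = central - scale * peripheral
```

In a real analysis the scale factor comes from the relative decay-photon yields of the centrality classes, and the decay-shape assumption is where the systematics enter.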
https://worldwidescience.org/topicpages/s/survey+stations+spaced.html
#### Sample records for survey stations spaced 1. Radiation survey in the International Space Station Directory of Open Access Journals (Sweden) Narici Livio 2015-01-01 Full Text Available The project ALTEA-shield/survey is part of a European Space Agency (ESA) – ILSRA (International Life Science Research Announcement) program and provides a detailed study of the International Space Station (ISS) (USLab and partly Columbus) radiation environment. The experiment spans over 2 years, from September 20, 2010 to September 30, 2012, for a total of about 1.5 years of effective measurements. The ALTEA detector system measures all heavy ions above helium and, to a limited extent, hydrogen and helium (respectively, in 25 MeV–45 MeV and 25 MeV/n–250 MeV/n energy windows) while tracking every individual particle. It measures independently the radiation along the three ISS coordinate axes. The data presented consist of flux, dose, and dose equivalent over the time of investigation, at the different surveyed locations. Data are selected from the different geographic regions (low and high latitudes and the South Atlantic Anomaly, SAA). Even with a limited acceptance window for the proton contribution, the flux/dose/dose equivalent results as well as the radiation spectra provide information on how the radiation risks change in the different surveyed sites. The large changes in radiation environment found among the measured sites, due to the different shield/mass distribution, require a detailed Computer-Aided Design (CAD) model to be used together with these measurements for the validation of radiation models in space habitats. Altitude also affects measured radiation, especially in the SAA. In the period of measurements, the altitude (averaged over each minute) ranged from 339 km to 447 km. Measurements show the significant shielding effect of the ISS truss, responsible for a consistent amount of reduction in dose equivalent (and so in radiation quality).
Measured Galactic Cosmic Ray (GCR) dose rates at high latitude range from 0.354 ± 0.002 nGy/s to 0.770 ± 0.006 nGy/s while dose equivalent ranges from 1.21 ± 0.04 nSv/s to 6.05 ± 0 2. Free-free and fixed base modal survey tests of the Space Station Common Module Prototype Science.gov (United States) Driskill, T. C.; Anderson, J. B.; Coleman, A. D. This paper describes the testing aspects and the problems encountered during the free-free and fixed base modal surveys completed on the original Space Station Common Module Prototype (CMP). The CMP is a 40-ft long by 14.5-ft diameter 'waffle-grid' cylinder built by the Boeing Company and housed at the Marshall Space Flight Center (MSFC) near Huntsville, AL. The CMP modal survey tests were conducted at MSFC by the Dynamics Test Branch. The free-free modal survey tests (June '90 to Sept. '90) included interface verification tests (IFVT), often referred to as impedance measurements, mass-additive testing and linearity studies. The fixed base modal survey tests (Feb. '91 to April '91), including linearity studies, were conducted in a fixture designed to constrain the CMP in 7 total degrees-of-freedom at five trunnion interfaces (two primary, two secondary, and the keel). The fixture also incorporated an airbag off-load system designed to alleviate the non-linear effects of friction in the primary and secondary trunnion interfaces. Numerous test configurations were performed with the objective of providing a modal data base for evaluating the various testing methodologies to verify dynamic finite element models used for input to coupled load analysis. 3. Free-free and fixed base modal survey tests of the Space Station Common Module Prototype Science.gov (United States) Driskill, T. C.; Anderson, J. B.; Coleman, A. D. 1992-01-01 This paper describes the testing aspects and the problems encountered during the free-free and fixed base modal surveys completed on the original Space Station Common Module Prototype (CMP).
The CMP is a 40-ft long by 14.5-ft diameter 'waffle-grid' cylinder built by the Boeing Company and housed at the Marshall Space Flight Center (MSFC) near Huntsville, AL. The CMP modal survey tests were conducted at MSFC by the Dynamics Test Branch. The free-free modal survey tests (June '90 to Sept. '90) included interface verification tests (IFVT), often referred to as impedance measurements, mass-additive testing and linearity studies. The fixed base modal survey tests (Feb. '91 to April '91), including linearity studies, were conducted in a fixture designed to constrain the CMP in 7 total degrees-of-freedom at five trunnion interfaces (two primary, two secondary, and the keel). The fixture also incorporated an airbag off-load system designed to alleviate the non-linear effects of friction in the primary and secondary trunnion interfaces. Numerous test configurations were performed with the objective of providing a modal data base for evaluating the various testing methodologies to verify dynamic finite element models used for input to coupled load analysis. 4. Space Station operations Science.gov (United States) Gray, R. H. 1985-01-01 An evaluation of the success of the Space Station will be based on the service provided to the customers by the Station crew, the productivity of the crew, and the costs of operation. Attention is given to details regarding Space Station operations, a summary of operational philosophies and requirements, logistics and resupply operations, prelaunch processing and launch operations, on-orbit operations, aspects of maintainability and maintenance, habitability, and questions of medical care. A logistics module concept is considered along with a logistics module processing timeline, a habitability module concept, and a Space Station rescue mission. 5. Space Station galley design Science.gov (United States) Trabanino, Rudy; Murphy, George L.; Yakut, M. M. 
1986-01-01 An Advanced Food Hardware System galley for the initial operating capability (IOC) Space Station is discussed. Space Station will employ food hardware items that have never been flown in space, such as a dishwasher, microwave oven, blender/mixer, bulk food and beverage dispensers, automated food inventory management, a trash compactor, and an advanced technology refrigerator/freezer. These new technologies and designs are described and the trades, design, development, and testing associated with each are summarized. 6. Space station operations management Science.gov (United States) Cannon, Kathleen V. 1989-01-01 Space Station Freedom operations management concepts must be responsive to the unique challenges presented by the permanently manned international laboratory. Space Station Freedom will be assembled over a three year period where the operational environment will change as significant capability plateaus are reached. First Element Launch, Man-Tended Capability, and Permanent Manned Capability represent milestones in operational capability that is increasing toward mature operations capability. Operations management concepts are being developed to accommodate the varying operational capabilities during assembly, as well as the mature operational environment. This paper describes operations management concepts designed to accommodate the uniqueness of Space Station Freedom, utilizing tools and processes that seek to control operations costs. 7. The organized Space Station Science.gov (United States) Lew, Leong W. Space Station organization designers should consider the onboard stowage system to be an integral part of the environment structured for productive working conditions. In order to achieve this, it is essential to use an efficient inventory control system able to track approximately 50,000 items over a 90-day period, while maintaining peak crew performance.
It is noted that a state-of-the-art bar-code inventory management system cannot satisfy all Space Station requirements, such as the location of a critical missing item. 8. Space Station Water Quality Science.gov (United States) Willis, Charles E. (Editor) 1987-01-01 The manned Space Station will exist as an isolated system for periods of up to 90 days. During this period, safe drinking water and breathable air must be provided for an eight member crew. Because of the large mass involved, it is not practical to consider supplying the Space Station with water from Earth. Therefore, it is necessary to depend upon recycled water to meet both the human and nonhuman water needs on the station. Sources of water that will be recycled include hygiene water, urine, and cabin humidity condensate. A certain amount of fresh water can be produced by CO2 reduction process. Additional fresh water will be introduced into the total pool by way of food, because of the free water contained in food and the water liberated by metabolic oxidation of the food. A panel of scientists and engineers with extensive experience in the various aspects of wastewater reuse was assembled for a 2 day workshop at NASA-Johnson. The panel included individuals with expertise in toxicology, chemistry, microbiology, and sanitary engineering. A review of Space Station water reclamation systems was provided. 9. A Survey of Staphylococcus sp and its Methicillin Resistance aboard the International Space Station Science.gov (United States) Bassinger, V. J.; Fontenot, S. L.; Castro, V. A.; Ott, C.; Healy, M.; Pierson, D. L. 2004-01-01 Background: Within the past few years, methicillin-resistant Staphylococcus aureus has emerged in environments with susceptible hosts in close proximity, such as hospitals and nursing homes. 
As the International Space Station (ISS) represents a semi-closed environment with a high level of crewmember interaction, an evaluation of isolates of clinical and environmental Staphylococcus aureus and coagulase negative Staphylococcus was performed to determine if this trend was also present in astronauts occupying ISS or on surfaces of the space station itself. Methods: Identification of isolates was completed using VITEK (GPI cards, BioMerieux), 16S ribosomal DNA analysis (MicroSeq 500, ABI), and Rep-PCR DNA fingerprinting (DiversiLab, Bacterial Barcodes). Susceptibility tests were performed using VITEK (GPS-105 cards, BioMerieux) and resistance characteristics were evaluated by testing for the presence of the mecA gene (PBP2' MRSA test kit, Oxoid). Results: Rep-PCR analysis indicated the transfer of S. aureus between crewmembers and between crewmembers and ISS surfaces. While a variety of S. aureus were identified from both the crewmembers and environment, evaluations of the microbial population indicated minimal methicillin resistance. Results of this study indicated that within the semi-closed ISS environment, transfer of bacteria between crewmembers and their environment has been occurring, although there was no indication of a high concentration of methicillin resistant Staphylococcus species. Conclusions: While this study suggests that the spread of methicillin resistant S. aureus is not currently a concern aboard ISS, the increasing incidence of Earth-based antibiotic resistance indicates a need for continued clinical and environmental monitoring. 10. Space Station end effector strategy study Science.gov (United States) Katzberg, Stephen J.; Jensen, Robert L.; Willshire, Kelli F.; Satterthwaite, Robert E. 1987-01-01 The results of a study are presented for terminology definition, identification of functional requirements, technology assessment, and proposed end effector development strategies for the Space Station Program.
The study is composed of a survey of available or under-developed end effector technology, identification of requirements from baselined Space Station documents, a comparative assessment of the match between technology and requirements, and recommended strategies for end effector development for the Space Station Program. 11. Space teleoperations technology for Space Station evolution Science.gov (United States) Reuter, Gerald J. 1990-01-01 Viewgraphs on space teleoperations technology for space station evolution are presented. Topics covered include: shuttle remote manipulator system; mobile servicing center functions; mobile servicing center technology; flight telerobotic servicer-telerobot; flight telerobotic servicer technology; technologies required for space station assembly; teleoperation applications; and technology needs for space station evolution. 12. Space Station fluid management logistics Science.gov (United States) Dominick, Sam M. 1990-01-01 Viewgraphs and discussion on space station fluid management logistics are presented. Topics covered include: fluid management logistics - issues for Space Station Freedom evolution; current fluid logistics approach; evolution of Space Station Freedom fluid resupply; launch vehicle evolution; ELV logistics system approach; logistics carrier configuration; expendable fluid/propellant carrier description; fluid carrier design concept; logistics carrier orbital operations; carrier operations at space station; summary/status of orbital fluid transfer techniques; Soviet progress tanker system; and Soviet propellant resupply system observations. 13. Build Your Own Space Station Science.gov (United States) Bolinger, Allison 2016-01-01 This presentation will be used to educate elementary students on the purposes and components of the International Space Station and then allow them to build their own space stations with household objects and then present details on their space stations to the rest of the group. 14. 
Space stations systems and utilization CERN Document Server Messerschmid, Ernst 1999-01-01 The design of space stations like the recently launched ISS is a highly complex and interdisciplinary task. This book describes component technologies, system integration, and the potential usage of space stations in general and of the ISS in particular. It thus addresses students and engineers in space technology. Ernst Messerschmid holds the chair of space systems at the University of Stuttgart and was one of the first German astronauts. 15. Hey! What's Space Station Freedom? Science.gov (United States) Vonehrenfried, Dutch This video, 'Hey! What's Space Station Freedom?', has been produced as a classroom tool geared toward middle school children. There are three segments to this video. Segment One is a message to teachers presented by Dr. Jeannine Duane, New Jersey, 'Teacher in Space'. Segment Two is a brief Social Studies section and features a series of Presidential Announcements by President John F. Kennedy (May 1961), President Ronald Reagan (July 1982), and President George Bush (July 1989). These historical announcements are speeches concerning the present and future objectives of the United States' space programs. In the last segment, Charlie Walker, former Space Shuttle astronaut, teaches a group of middle school children, through models, computer animation, and actual footage, what Space Station Freedom is, who is involved in its construction, how it is to be built, what each of the modules on the station is for, and how long and in what sequence this construction will occur. There is a brief animation segment where, through the use of cartoons, the children fly up to Space Station Freedom as astronauts, perform several experiments and are given a tour of the station, and fly back to Earth. Space Station Freedom will take four years to build and will have three lab modules, one from ESA and another from Japan, and one habitation module for the astronauts to live in.
16. Hey! What's Space Station Freedom? Science.gov (United States) Vonehrenfried, Dutch 1992-01-01 This video, 'Hey! What's Space Station Freedom?', has been produced as a classroom tool geared toward middle school children. There are three segments to this video. Segment One is a message to teachers presented by Dr. Jeannine Duane, New Jersey, 'Teacher in Space'. Segment Two is a brief Social Studies section and features a series of Presidential Announcements by President John F. Kennedy (May 1961), President Ronald Reagan (July 1982), and President George Bush (July 1989). These historical announcements are speeches concerning the present and future objectives of the United States' space programs. In the last segment, Charlie Walker, former Space Shuttle astronaut, teaches a group of middle school children, through models, computer animation, and actual footage, what Space Station Freedom is, who is involved in its construction, how it is to be built, what each of the modules on the station is for, and how long and in what sequence this construction will occur. There is a brief animation segment where, through the use of cartoons, the children fly up to Space Station Freedom as astronauts, perform several experiments and are given a tour of the station, and fly back to Earth. Space Station Freedom will take four years to build and will have three lab modules, one from ESA and another from Japan, and one habitation module for the astronauts to live in. 17. Conveying International Space Station Science Science.gov (United States) Goza, Sharon P. 2017-01-01 Over 1,000 experiments have been completed, and others are being conducted and planned on the International Space Station (ISS). In order to make the information on these experiments accessible, the IGOAL develops mobile applications to easily access this content and video products to convey high level concepts.
This presentation will feature the Space Station Research Explorer as well as several publicly available video examples. 18. Internationalization of the Space Station Science.gov (United States) Lottmann, R. V. 1985-01-01 Attention is given to the NASA Space Station system elements whose production is under consideration by potential foreign partners. The ESA's Columbus Program declaration encompasses studies of pressurized modules, unmanned payload carriers, and ground support facilities. Canada has expressed interest in construction and servicing facilities, solar arrays, and remote sensing facilities. Japanese studies concern a multipurpose experimental module concept. Each of these foreign investments would expand Space Station capabilities and lay the groundwork for long term partnerships. 19. Space station molecular sieve development Science.gov (United States) Chang, C.; Rousseau, J. 1986-01-01 An essential function of a space environmental control system is the removal of carbon dioxide (CO2) from the atmosphere to control the partial pressure of this gas at levels lower than 3 mm Hg. The use of regenerable solid adsorbents for this purpose was demonstrated effectively during the Skylab mission. Earlier sorbent systems used zeolite molecular sieves. The carbon molecular sieve is a hydrophobic adsorbent with excellent potential for space station application. Although carbon molecular sieves were synthesized and investigated, these sieves were designed to simulate the sieving properties of 5A zeolite and for O2/N2 separation. This program was designed to develop hydrophobic carbon molecular sieves for CO2 removal from a space station crew environment. It is a first phase effort involved in sorbent material development and in demonstrating the utility of such a material for CO2 removal on space stations. 
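The 3 mm Hg figure in the molecular sieve abstract is a partial-pressure requirement, so it can be checked against a measured CO2 mole fraction via Dalton's law. A minimal sketch follows; the cabin pressure and CO2 fractions used here are illustrative assumptions, not values from the abstract:

```python
# Convert a CO2 mole fraction to partial pressure (Dalton's law:
# p_CO2 = x_CO2 * P_total) and compare with the 3 mm Hg control limit.
# Cabin total pressure and mole fractions below are assumed for illustration.

MMHG_PER_KPA = 7.50062  # 1 kPa = 7.50062 mm Hg

def co2_partial_pressure_mmhg(x_co2: float, p_total_kpa: float = 101.325) -> float:
    """Partial pressure of CO2 in mm Hg for mole fraction x_co2."""
    return x_co2 * p_total_kpa * MMHG_PER_KPA

# 0.3% CO2 at an assumed sea-level cabin pressure:
p = co2_partial_pressure_mmhg(0.003)
print(f"{p:.2f} mm Hg, within 3 mm Hg limit: {p < 3.0}")
```

At a sea-level cabin pressure, 0.3% CO2 sits just under the limit, which is why continuous removal is needed rather than occasional purging.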
The sieve must incorporate the following requirements: it must be hydrophobic; it must have high dynamic capacity for carbon dioxide at the low partial pressure of the space station atmosphere; and it must be chemically stable and must not generate contaminants. 20. The International Space Station Habitat Science.gov (United States) Watson, Patricia Mendoza; Engle, Mike 2003-01-01 The International Space Station (ISS) is an engineering project unlike any other. The vehicle is inhabited and operational as it is constructed. The habitability resources available to the crew are the sleep quarters, the galley, the waste and hygiene compartment, and exercise equipment. These items are mainly in the Russian Service Module and their placement is awkward for the crew to use and work around. ISS assembly will continue with the truss build and the addition of the International Partner Laboratories. Prior to the addition of the International Partner Laboratories, Node 2 will be added. The Node 2 module will provide additional stowage volume and room for more crew sleep quarters. The purpose of the ISS is to perform research, and a major area of emphasis is the effects of long duration space flight on humans; as a result of this research the habitability requirements for International Space Station crews will be determined. 1. Space Station tethered elevator system Science.gov (United States) Haddock, Michael H.; Anderson, Loren A.; Hosterman, K.; Decresie, E.; Miranda, P.; Hamilton, R. 1989-01-01 The optimized conceptual engineering design of a space station tethered elevator is presented. The tethered elevator is an unmanned, mobile structure which operates on a ten-kilometer tether spanning the distance between Space Station Freedom and a platform. Its capabilities include providing access to residual gravity levels, remote servicing, and transportation to any point along a tether.
The report discusses the potential uses, parameters, and evolution of the spacecraft design. Emphasis is placed on the elevator's structural configuration and three major subsystem designs. First, the design of elevator robotics used to aid in elevator operations and tethered experimentation is presented. Second, the design of drive mechanisms used to propel the vehicle is discussed. Third, the design of an onboard self-sufficient power generation and transmission system is addressed. 2. 47 CFR 97.207 - Space station. Science.gov (United States) 2010-10-01 ... space station licensee has assessed and limited the amount of debris released in a planned manner during... space station becoming a source of debris by collisions with large debris or other operational space... 47 Telecommunication 5 2010-10-01 2010-10-01 false Space station. 97.207 Section 97.207... 3. International Space Station technology demonstrations Science.gov (United States) Holt, Alan C. 1998-01-01 4. Live From Space Station Outreach Payload Project Data.gov (United States) National Aeronautics and Space Administration — The Live from Space Station? Outreach Payload (LFSSOP) is a technologically challenging, exciting opportunity for university students to conduct significant research... 5. Space Station Electrical Power System Science.gov (United States) Labus, Thomas L.; Cochran, Thomas H. 1987-01-01 The purpose of this paper is to describe the design of the Space Station Electrical Power System. This includes the Photovoltaic and Solar Dynamic Power Modules as well as the Power Management and Distribution System (PMAD). In addition, two programmatic options for developing the Electrical Power System will be presented. One approach is defined as the Enhanced Configuration and represents the results of the Phase B studies conducted by the NASA Lewis Research Center over the last two years. 
Another option, the Phased Program, represents a more measured approach to reaching about the same capability as the Enhanced Configuration. 6. International Space Station: Expedition 2000 Science.gov (United States) 2000-01-01 Live footage of the International Space Station (ISS) presents an inside look at the groundwork and assembly of the ISS. Footage includes both animation and live shots of a Space Shuttle liftoff. Phil West, Engineer; Dr. Catherine Clark, Chief Scientist ISS; and Joe Edwards, Astronaut, narrate the video. The first topic of discussion is People and Communications. Good communication is a key component in our ISS endeavor. Dr. Catherine Clark uses two soup cans attached by a string to demonstrate communication. Bill Nye the Science Guy talks briefly about science aboard the ISS. Charlie Spencer, Manager of Space Station Simulators, talks about communication aboard the ISS. The second topic of discussion is Engineering. Bonnie Dunbar, Astronaut at Johnson Space Flight Center, gives a tour of the Japanese Experiment Module (JEM). She takes us inside Node 2 and the U.S. Lab Destiny. She also shows where protein crystal growth experiments are performed. Audio terminal units are used for communication in the JEM. A demonstration of solar arrays and how they are tested is shown. Alan Bell, Project Manager MRMDF (Mobile Remote Manipulator Development Facility), describes the robot arm that is used on the ISS and how it maneuvers the Space Station. The third topic of discussion is Science and Technology. Dr. Catherine Clark, using a balloon attached to a weight, drops the apparatus to the ground to demonstrate Microgravity. The bursting of the balloon is observed. Sherri Dunnette, Imaging Technologist, describes the various cameras that are used in space. The types of still cameras used are: 1) 35 mm, 2) medium format cameras, 3) large format cameras, 4) video cameras, and 5) the DV camera. 
Kumar Krishen, Chief Technologist ISS, explains inframetrics, infrared vision cameras and how they perform. The Short Arm Centrifuge is shown by Dr. Millard Reske, Senior Life Scientist, to subject astronauts to forces greater than 1-g. Reske is interested in the physiological effects of 7. Space Biosciences, Space-X, and the International Space Station Science.gov (United States) Wigley, Cecilia 2014-01-01 Space Biosciences Research on the International Space Station uses living organisms to study a variety of research questions: to enhance our understanding of fundamental biological processes; to develop the foundations for a safe, productive human exploration of space; and to improve the quality of life on Earth. 8. The US Space Station programme Science.gov (United States) Hodge, J. D. 1985-01-01 The Manned Space Station (MSS) involves NASA, and other countries, in the operation, maintenance and expansion of a permanent space facility. The extensive use of automation and robotics will advance those fields, and experimentation will be carried out in scientific and potentially commercial projects. The MSS will provide a base for astronomical observations, spacecraft assembly, refurbishment and repair, transportation intersection, staging for interplanetary exploration, and storage. Finally, MSS operations will be performed semi-autonomously from ground control. Phase B analysis is nearing completion, and precedes hardware development. Studies are being performed on generic advanced technologies which can reliably and flexibly be incorporated into the MSS, such as attitude control and stabilization, power, thermal, environmental and life support control, auxiliary propulsion, data management, etc. Guidelines are also being formulated regarding the areas of participation by other nations. 9. Space station operating system study Science.gov (United States) Horn, Albert E.; Harwell, Morris C.
1988-01-01 The current phase of the Space Station Operating System study is based on the analysis, evaluation, and comparison of the operating systems implemented on the computer systems and workstations in the software development laboratory. Primary emphasis has been placed on the DEC MicroVMS operating system as implemented on the MicroVax II computer, with comparative analysis of the SUN UNIX system on the SUN 3/260 workstation computer, and to a limited extent, the IBM PC/AT microcomputer running PC-DOS. Some benchmark development and testing was also done for the Motorola MC68010 (VM03 system) before the system was taken from the laboratory. These systems were studied with the objective of determining their capability to support Space Station software development requirements, specifically for multi-tasking and real-time applications. The methodology utilized consisted of development, execution, and analysis of benchmark programs and test software, and the experimentation and analysis of specific features of the system or compilers in the study. 10. Space Station Biological Research Project Science.gov (United States) Johnson, Catherine C.; Hargens, Alan R.; Wade, Charles E. 1995-01-01 NASA Ames Research Center is responsible for the development of the Space Station Biological Research Project (SSBRP) which will support non-human life sciences research on the International Space Station Alpha (ISSA). The SSBRP is designed to support both basic research to understand the effect of altered gravity fields on biological systems and applied research to investigate the effects of space flight on biological systems. The SSBRP will provide the necessary habitats to support avian and reptile eggs, cells and tissues, plants and rodents. In addition a habitat to support aquatic specimens will be provided by our international partners. 
Habitats will be mounted in ISSA-compatible racks at µ-g and will also be mounted on a 2.5 m diameter centrifuge, except for the egg incubator, which has an internal centrifuge. The 2.5 m centrifuge will provide artificial gravity levels over the range of 0.01 G to 2 G. The current schedule is to launch the first rack in 1999, the Life Sciences glovebox and a second rack early in 2001, a 4-habitat 2.5 m centrifuge later the same year in its own module, and to upgrade the centrifuge to 8 habitats in 2004. The rodent habitats will be derived from the Advanced Animal Habitat currently under development for the Shuttle program and will be capable of housing either rats or mice individually or in groups (6 rats/group and at least 12 mice/group). The egg incubator will be an upgraded Avian Development Facility, also developed for the Shuttle program through a Small Business and Innovative Research grant. The Space Tissue Loss cell culture apparatus, developed by Walter Reed Army Institute of Research, is being considered for the cell and tissue culture habitat. The Life Sciences Glovebox is crucial to all life sciences experiments for specimen manipulation and performance of science procedures. It will provide two levels of containment between the work volume and the crew through the use of seals and negative pressure. The glovebox 11. Space Station Freedom - What if...? Science.gov (United States) Grey, Jerry 1992-10-01 The use of novel structural designs and the Energia launch system of the Commonwealth of Independent States for the Space Station Freedom (SSF) program is evaluated by means of a concept analysis. The analysis assumes that: (1) Energia is used for all cargo and logistics resupply missions; (2) the shuttles are launched from the U.S.; and (3) an eight-person assured crew return vehicle is available. This launch/supply scenario reduces the deployment risk from 30 launches to a total of only eight launches, reducing the cost by about 15 billion U.S. dollars.
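The 0.01 G to 2 G range quoted above for the SSBRP's 2.5 m diameter centrifuge can be sanity-checked with the centripetal-acceleration relation a = ω²r. The sketch below assumes specimens ride at the full 1.25 m radius, which is an idealization:

```python
import math

G0 = 9.81        # standard gravity, m/s^2
RADIUS_M = 1.25  # 2.5 m diameter centrifuge; specimen at the rim (assumed)

def spin_rpm(g_level: float, r: float = RADIUS_M) -> float:
    """Spin rate in rpm giving a given artificial-gravity level via a = w^2 * r."""
    omega = math.sqrt(g_level * G0 / r)  # angular rate, rad/s
    return omega * 60.0 / (2.0 * math.pi)

for g in (0.01, 1.0, 2.0):
    print(f"{g:5.2f} G -> {spin_rpm(g):5.1f} rpm")
```

The quoted range thus corresponds to spin rates of roughly a few rpm up to a few tens of rpm, which is consistent with a small rack-mounted rotor rather than a large spinning module.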
The scenario also significantly increases the expected habitable and storage volumes and decreases the deployment time by three years over previous scenarios. The specific payloads are given for Energia launches, emphasizing a proposed design for the common module cluster that incorporates direct structural attachment to the truss at midspan. The design is shown to facilitate the accommodation of additional service hangars and to provide a more efficient program for spacecraft habitable space. 12. Space station induced electromagnetic effects Science.gov (United States) Singh, N. 1988-01-01 Several mechanisms which can cause electric (E) and magnetic (B) field contaminations of the Space Station environment are identified. The level of E and B fields generated by some of them, such as the motion of the vehicle across the ambient magnetic field B(0) and the 20-kHz leakage currents and charges, can be controlled by proper design considerations. On the other hand, there are some mechanisms which are inherent to the interaction of large vehicles with the plasma and whose contributions to E and B fields probably cannot be controlled; these include plasma waves in the wake and ram directions and the effects of the volume current generated by the ionization of neutrals. The interaction of high-voltage solar arrays with plasma is yet another rich source of E and B fields, and it is probably uncontrollable. Wherever possible, quantitative estimates of E and B are given. A set of recommendations is included for further study in areas where in-depth knowledge is seriously lacking. 13. STS-97 Onboard Photograph - International Space Station Science.gov (United States) 2000-01-01 This image of the International Space Station in orbit was taken from the Space Shuttle Endeavour prior to docking. Most of the Station's components are clearly visible in this photograph.
They are the Node 1 or Unity Module docked with the Functional Cargo Block or Zarya (top) that is linked to the Zvezda Service Module. The Soyuz spacecraft is at the bottom. 14. Welding/brazing for Space Station repair Science.gov (United States) Dickinson, David W.; Babel, H. W.; Conaway, H. R.; Hooper, W. H. 1990-01-01 Viewgraphs on welding/brazing for space station repair are presented. Topics covered include: fabrication and repair candidates; debris penetration of module panel; welded repair patch; mechanical assembly of utility fluid line; space station utility systems; Soviet aerospace fabrication - an overview; and processes under consideration. 15. Documentation of the space station/aircraft acoustic apparatus Science.gov (United States) Clevenson, Sherman A. 1987-01-01 This paper documents the design and construction of the Space Station/Aircraft Acoustic Apparatus (SS/AAA). Its capabilities both as a space station acoustic simulator and as an aircraft acoustic simulator are described. Also indicated are the considerations which ultimately resulted in man-rating the SS/AAA. In addition, the results of noise surveys and reverberation time and absorption coefficient measurements are included. 16. Artificial intelligence - NASA. [robotics for Space Station Science.gov (United States) Erickson, J. D. 1985-01-01 Artificial Intelligence (AI) represents a vital common space support element needed to enable the civil space program and commercial space program to perform their missions successfully. It is pointed out that advances in AI stimulated by the Space Station Program could benefit the U.S. in many ways. A fundamental challenge for the civil space program is to meet the needs of the customers and users of space with facilities enabling maximum productivity and having low start-up costs, and low annual operating costs. 
An effective way to meet this challenge may involve a man-machine system in which artificial intelligence, robotics, and advanced automation are integrated into high reliability organizations. Attention is given to the benefits, NASA strategy for AI, candidate space station systems, the Space Station as a stepping stone, and the commercialization of space. 17. Progressive autonomy. [for space station systems operation Science.gov (United States) Anderson, J. L. 1984-01-01 The present investigation is concerned with the evolution of a space station in terms of the progression of autonomy, as systems perspectives and architectural concepts permit. The distinction between automation and autonomy is considered along with the evolution of autonomy, and the evolution of automation in station operations. Attention is given to the startup of a complex technological system, aspects of station control, questions of crew operational support, factors regarding the habitability of a space station, system design philosophy for autonomy, evolvability, latent capability, stage commonality, and multiple modularity. It is concluded that an evolutionary space station operating over a period of 10-20 years with a great increase in capability over that time will require a design philosophy which is more flexible and open-ended than for previous space systems. 18. Raising the AIQ of the Space Station Science.gov (United States) Lum, Henry; Heer, Ewald 1987-01-01 Expert systems and robotics technologies are to be significantly advanced during the Space Station program. Artificial intelligence systems (AI) on the Station will include 'scars', which will permit upgrading the AI capabilities as the Station evolves to autonomy. NASA-Ames is managing the development of the AI systems through a series of demonstrations, the first, controlling a single subsystem, to be performed in 1988. 
The capabilities being integrated into the first demonstration are described; however, machine learning and goal-driven natural language understanding will not reach a prototype stage until the mid-1990s. Steps which will be taken to endow the computer systems with the ability to move from heuristic reasoning to factual knowledge, i.e., learning from experience, are explored. It is noted that the development of Space Station expert systems depends on the development of experts in Station operations, which will not happen until the Station has been used extensively by crew members. 19. A Simple Space Station Rescue Vehicle Science.gov (United States) Petro, Andrew 1995-01-01 Early in the development of the Space Station it was determined that there is a need to have a vehicle which could be used in the event that the Space Station crew need to quickly depart and return to Earth when the Space Shuttle is not available. Unplanned return missions might occur because of a medical emergency, a major Space Station failure, or if there is a long-term interruption in the delivery of logistics to the Station. The rescue vehicle was envisioned as a simple capsule-type spacecraft which would be maintained in a dormant state at the Station for several years and be quickly activated by the crew when needed. During the assembly phase for the International Space Station, unplanned return missions will be performed by the Russian Soyuz vehicle, which can return up to three people. When the Station assembly is complete there will be a need for rescue capability for up to six people. This need might be met by an additional Soyuz vehicle or by a new vehicle which might come from a variety of sources. This paper describes one candidate concept for a Space Station rescue vehicle. The proposed rescue vehicle design has the blunt-cone shape of the Apollo command module but with a larger diameter. The rescue vehicle would be delivered to the Station in the payload bay of the Space Shuttle.
The spacecraft design can accommodate six to eight people for a one-day return mission. All of the systems for the mission including deorbit propulsion are contained within the conical spacecraft and so there is no separate service module. The use of the proven Apollo re-entry shape would greatly reduce the time and cost for development and testing. Other aspects of the design are also intended to minimize development cost and simplify operations. This paper will summarize the evolution of rescue vehicle concepts, the functional requirements for a rescue vehicle, and describe the proposed design. 20. Space station synergetic RAM-logistics analysis Science.gov (United States) Dejulio, Edmund T.; Leet, Joel H. 1988-01-01 NASA's Space Station Maintenance Planning and Analysis (MP&A) Study is a step in the overall Space Station Program to define optimum approaches for on-orbit maintenance planning and logistics support. The approach used in the MP&A study and the analysis process used are presented. Emphasis is on maintenance activities and processes that can be accomplished on orbit within the known design and support constraints of the Space Station. From these analyses, recommendations for maintainability/maintenance requirements are established. The ultimate goal of the study is to reduce on-orbit maintenance requirements to a practical and safe minimum, thereby conserving crew time for productive endeavors. The reliability, availability, and maintainability (RAM) and operations performance evaluation models used were assembled and developed as part of the MP&A study and are described. A representative space station system design is presented to illustrate the analysis process. 1. Space Station data management system architecture Science.gov (United States) Mallary, William E.; Whitelaw, Virginia A. 1987-01-01 Within the Space Station program, the Data Management System (DMS) functions in a dual role. 
First, it provides the hardware resources and software services which support the data processing, data communications, and data storage functions of the onboard subsystems and payloads. Second, it functions as an integrating entity which provides a common operating environment and human-machine interface for the operation and control of the orbiting Space Station systems and payloads by both the crew and the ground operators. This paper discusses the evolution and derivation of the requirements and issues which have had significant effect on the design of the Space Station DMS, describes the DMS components and services which support system and payload operations, and presents the current architectural view of the system as it exists in October 1986; one-and-a-half years into the Space Station Phase B Definition and Preliminary Design Study. 2. Space Station Displays and Controls Technology Evolution Science.gov (United States) Blackburn, Greg C. 1990-01-01 Viewgraphs on space station displays and controls technology evolution are presented. Topics covered include: a historical perspective; major development objectives; current development activities; key technology areas; and technology evolution issues. 3. Predictive Attitude Maintenance For A Space Station Science.gov (United States) Hattis, Philip D. 1989-01-01 Paper provides mathematical basis for predictive management of angular momenta of control-moment gyroscopes (CMG's) to control attitude of orbiting space station. Numerical results presented for pitch control of proposed power-tower space station. Based on prior orbit history and mathematical model of density of atmosphere, predictions made of requirements on dumping and storage of angular momentum in relation to current loading state of CMG's and to acceptable attitude tolerances. 4. Alkaline RFC Space Station prototype - 'Next step Space Station'. [Regenerative Fuel Cells Science.gov (United States) Hackler, I. M. 
1986-01-01 The regenerative fuel cell, a candidate technology for the Space Station's energy storage system, is described. An advanced development program was initiated to design, manufacture, and integrate a regenerative fuel cell Space Station prototype (RFC SSP). The RFC SSP incorporates long-life fuel cell technology, increased cell area for the fuel cells, and high voltage cell stacks for both units. The RFC SSP's potential for integration with the Space Station's life support and propulsion systems is discussed. 5. Space power station. Uchu hatsuden [Space power generation] Energy Technology Data Exchange (ETDEWEB) Kudo, I. (Electrotechnical Laboratory, Tsukuba (Japan)) 1993-02-20 A calculation indicates that supplying the amount of electric power the world will use in the future will require 100 to 500 power plants, each with an output of the 5-GW class. If this projection is correct, it is beyond dispute that nuclear power will constitute a core of that generation, even though the geographical conditions for nuclear power plants are severe. It is also certain that power generation using clean solar energy will play an important role if power supply stability can be achieved. This paper describes plans to develop space solar power generation and space nuclear power generation, which can supply power while avoiding the problems of geographical constraints and power supply stability. The space solar power generation system would capture solar energy in geostationary orbit. According to the results of discussions in the U.S.A., the plan calls for solar cell sheets spread over the surface of a structure 5 km × 10 km × 0.5 km thick; the electric power obtained is transmitted to a rectenna, a receiving antenna on the ground, with a size of 10 km × 13 km. The space nuclear power generation system would similarly be constructed in geostationary orbit. Research on space nuclear reactors has already begun. 10 refs., 8 figs., 1 tab. 6. International Space Station Systems Engineering.
Case Study Science.gov (United States) 2010-01-01 cargo transfer vehicle that is launched on the Ariane V expendable rocket. The first ATV (named Jules Verne ) successfully completed its first...Griffin to the Subcommittee on Space, Aeronautics and Related Sciences, 15 November 2007 63 “ Jules Verne Refuels the International Space Station 7. Large space reflector technology on the Space Station Science.gov (United States) Mankins, J. C.; Dickinson, R. M.; Freeland, R. E.; Marzwell, N. I. 1986-01-01 This paper discusses the role of the Space Station in the evolutionary development of large space reflector technology and the accommodation of mission systems which will apply large space reflectors during the late 1990s and the early part of the next century. Reflectors which range from 10 to 100 meters in size and which span the electromagnetic spectrum for applications that include earth communications, earth observations, astrophysics and solar physics, and deep space communications are discussed. The role of the Space Station in large space reflector technology development and system performance demonstration is found to be critical; that role involves the accommodation of a wide variety of technology demonstrations and operational activities on the Station, including reflector deployment and/or assembly, mechanical performance verification and configuration refinement, systematic diagnostics of reflector surfaces, structural dynamics and controls research, overall system performance characterization and modification (including both radio frequency field pattern measurements and required end-to-end system demonstrations), and reflector-to-spacecraft integration and staging. A unique facility for Space Station-based, large space reflector research and development is proposed. A preliminary concept for such a Space Station-based Large Space Reflector Facility (LSRF) is described. 8. 
The space station integrated refuse management system Science.gov (United States) Anderson, Loren A. 1988-01-01 The design and development of an Integrated Refuse Management System for the proposed International Space Station were performed. The primary goal was to make use of any existing potential energy or material properties that refuse may possess. The secondary goal was based on the complete removal or disposal of those products that could not, in any way, benefit astronauts' needs aboard the Space Station. The design of a continuous living and experimental habitat in space has spawned the need for a highly efficient and effective refuse management system capable of managing nearly forty-thousand pounds of refuse annually. To satisfy this need, the following four integrable systems were researched and developed: collection and transfer; recycle and reuse; advanced disposal; and propulsion assist in disposal. The design of a Space Station subsystem capable of collecting and transporting refuse from its generation site to its disposal and/or recycling site was accomplished. Several methods of recycling or reusing refuse in the space environment were researched. The optimal solution was determined to be the method of pyrolysis. The objective of removing refuse from the Space Station environment, subsequent to recycling, was fulfilled with the design of a jettison vehicle. A number of jettison vehicle launch scenarios were analyzed. Selection of a proper disposal site and the development of a system to propel the vehicle to that site were completed. Reentry into the Earth's atmosphere for the purpose of refuse incineration was determined to be the most attractive solution. 9.
International Space Station lauded, debated at symposium Science.gov (United States) Showstack, Randy Astronauts labored successfully in early December to unfurl solar wings on the International Space Station, which will help make that craft the third-largest object in the night sky as seen from Earth, and help power the station for at least 15 years as a continuous small scientific village in space. While astronauts from the “Endeavour” U.S. space shuttle worked on the solar panels, NASA Administrator Dan Goldin and U.S. House of Representatives Science Committee Chair James Sensenbrenner (R-Wis.) praised the International Space Station (ISS), but exchanged shots across the bow during a December 4 symposium in Washington, D.C. Sensenbrenner, a leading congressional watchdog of the project, said that the United States “should be restructuring relations with Russia on the space station” because of that country's recent, and reportedly short-lived, threat to violate the international Missile Technology Control Regime (MTCR). The regime restricts the export of some delivery systems capable of carrying weapons of mass destruction. Sensenbrenner said Russia's recent announcement [of its intention] to break a secret deal not to sell conventional weapons to Iran after January 1, 2001 is a cause for reconsidering the space station working relationship. 10. Artist's Concept of International Space Station (ISS) Science.gov (United States) 2004-01-01 Pictured is an artist's concept of the International Space Station (ISS) with solar panels fully deployed. In addition to the use of solar energy, the ISS will employ at least three types of propulsive support systems for its operation. The first type is to reboost the Station to the correct orbital altitude to offset the effects of atmospheric and other drag forces. The second function is to maneuver the ISS to avoid collision with orbiting bodies (space junk).
The third is for attitude control to position the Station in the proper attitude for various experiments, temperature control, reboost, etc. The ISS, a gateway to permanent human presence in space, is a multidisciplinary laboratory, technology test bed, and observatory that will provide an unprecedented undertaking in scientific, technological, and international experimentation by cooperation of sixteen countries. 11. Toluene stability Space Station Rankine power system Science.gov (United States) Havens, V. N.; Ragaller, D. R.; Sibert, L.; Miller, D. 1987-01-01 A dynamic test loop is designed to evaluate the thermal stability of an organic Rankine cycle working fluid, toluene, for potential application to the Space Station power conversion unit. Samples of the noncondensible gases and the liquid toluene were taken periodically during the 3410 hour test at 750 F peak temperature. The results obtained from the toluene stability loop verify that toluene degradation will not lead to a loss of performance over the 30-year Space Station mission life requirement. The identity of the degradation products and the low rates of formation were as expected from toluene capsule test data. 12. Space station related investigations in Europe Science.gov (United States) Wienss, W.; Vallerain, E. 1984-10-01 Studies pertaining to the definition of Europe's role in the Space Station program are described, with consideration given to such elements as pressurized modules as laboratories for materials processing and life sciences, unpressurized elements, and service vehicles for on-orbit maintenance and repair activities. Candidate elements were selected against such criteria as clean interfaces, the satisfaction of European user needs, new technology items, and European financial capabilities; and their technical and programmatic implications were examined. 
Different scenarios were considered, ranging from a fully Space-Station-dependent case to a completely autonomous, free-flying man-tendable configuration. Recommendations on a collaboration between Europe and the United States are presented. 13. Evolution of the Space Station Robotic Manipulator Science.gov (United States) Razvi, Shakeel; Burns, Susan H. 2007-01-01 14. Human factors in space station architecture 1: Space station program implications for human factors research Science.gov (United States) Cohen, M. M. 1985-01-01 The space station program is based on a set of premises on mission requirements and the operational capabilities of the space shuttle. These premises will influence the human behavioral factors and conditions on board the space station. These include: launch in the STS Orbiter payload bay, orbital characteristics, power supply, microgravity environment, autonomy from the ground, crew make-up and organization, distributed command control, safety, and logistics resupply. The most immediate design impacts of these premises will be upon the architectural organization and internal environment of the space station. 15. Work/control stations in Space Station weightlessness Science.gov (United States) Willits, Charles 1990-01-01 An ergonomic integration of controls, displays, and associated interfaces with an operator, whose body geometry and dynamics may be altered by the state of weightlessness, is noted to rank in importance with the optimal positioning of controls relative to the layout and architecture of 'body-ported' work/control stations applicable to the NASA Space Station Freedom. A long-term solution to this complex design problem is envisioned to encompass the following features: multiple imaging, virtual optics, screen displays controlled by a keyboard ergonomically designed for weightlessness, cursor control, a CCTV camera, and a hand-controller featuring 'no-grip' vernier/tactile positioning. 
This controller frees all fingers for multiple-switch actuations, while retaining index/register determination with the hand controller. A single architectural point attachment/restraint may be used which requires no residual muscle tension in either brief or prolonged operation. 16. 47 CFR 25.114 - Applications for space station authorizations. Science.gov (United States) 2010-10-01 ... limited the probability of the space station becoming a source of debris by collisions with small debris... operator has assessed and limited the probability of the space station becoming a source of debris by collisions with large debris or other operational space stations. Where a space station will be launched into... 17. STS-106 Onboard Photograph - International Space Station Science.gov (United States) 2000-01-01 This image of the International Space Station (ISS) was taken when Space Shuttle Atlantis (STS-106 mission) approached the ISS for docking. At the top is the Russian Progress supply ship that is linked with the Russian-built Service Module or Zvezda. The Zvezda is connected with the Russian-built Functional Cargo Block (FGB) or Zarya. The U.S.-built Node 1 or Unity module is seen at the bottom. 18. Social factors in space station interiors Science.gov (United States) Cranz, Galen; Eichold, Alice; Hottes, Klaus; Jones, Kevin; Weinstein, Linda 1987-01-01 Using the example of the chair, which is often written into space station planning but which serves no non-cultural function in zero gravity, difficulties in overcoming cultural assumptions are discussed. An experimental approach is called for which would allow designers to separate cultural assumptions from logistic, social and psychological necessities. Simulations, systematic doubt and monitored brainstorming are recommended as part of basic research so that the designer will approach the problems of space module design with a complete program. 19.
Space Station overall management approach for operations Science.gov (United States) Paules, G. 1986-01-01 An Operations Management Concept developed by NASA for its Space Station Program is discussed. The operational goals, themes, and design principles established during program development are summarized. The major operations functions are described, including: space systems operations, user support operations, prelaunch/postlanding operations, logistics support operations, market research, and cost/financial management. Strategic, tactical, and execution levels of operational decision-making are defined. 20. Orbit keeping attitude control for space station Science.gov (United States) Barrows, D.; Bedell, H. 1983-01-01 It is pointed out that on-orbit configuration variability is expected to be a characteristic of a space station. The implementation of such a characteristic will present reboost and thruster control system designers with a number of new challenges. The primary requirement for the space station orbit reboost (or orbit keeping) system is to ensure system viability for extended duration and prevent an uncontrolled reentry, as with Skylab. For a station in a low Earth orbit, aerodynamic drag will be sufficient to cause relatively quick orbit altitude decay. A propulsion system is, therefore, needed to counteract the aerodynamic drag forces and to boost the vehicle to the desired orbit altitudes. A description is given of a typical reboost operational procedure and propellant requirements. Attention is given to thruster control systems, and aspects of reboost guidance. 1. Space station design - Innovation and compromise Science.gov (United States) Powell, L. E.; Cohen, A.; Craig, M. 1984-01-01 The NASA manned space station will consist of three main elements: habitable modules, solar collectors, and their interconnecting hardware.
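The orbit-keeping abstract above turns on a single balance: aerodynamic drag steadily removes orbital energy, and reboost propulsion restores it. The resulting altitude decay for a circular orbit can be sketched with the standard orbit-averaged relation da/dt = -sqrt(μa)·ρ·(CdA/m). All numbers below (density, area, mass, drag coefficient) are illustrative assumptions for a station-class vehicle, not values from the cited studies.

```python
import math

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def decay_rate(a, rho, cd, area, mass):
    """Orbit-averaged semi-major-axis decay rate da/dt (m/s) for a
    circular orbit under drag: da/dt = -sqrt(mu*a) * rho * (Cd*A/m)."""
    return -math.sqrt(MU * a) * rho * (cd * area / mass)

# Illustrative station-class numbers (assumptions, not program values)
a = 6_778_000.0   # semi-major axis for ~400 km altitude, m
rho = 3e-12       # thermospheric density, kg/m^3 (varies strongly with solar activity)
cd, area, mass = 2.2, 1500.0, 420_000.0  # drag coeff., frontal area (m^2), mass (kg)

per_day = decay_rate(a, rho, cd, area, mass) * 86_400
print(f"altitude decay ≈ {per_day:.0f} m/day")  # ≈ -106 m/day with these assumptions
```

The strong density dependence is why the orbit-lifetime abstract later in this list singles out atmospheric density models as the dominant uncertainty in decay predictions.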
The most arduous of the requirements to be met by this configuration is the simultaneous integration of terrestrial, solar, and celestial viewing instruments, since omnidirectional simultaneous viewing is made difficult by the station's large solar energy collection devices. The space station also imposes unique design conditions on individual subsystems, such as the power distribution and energy storage hardware. In particular, the thermal control subsystem must be designed to meet a variety of mission, payload, and housekeeping tasks that demand a large heat rejection capacity. Novel environmental control and life support subsystem technology will be indispensable. 2. Microgravity particle research on the Space Station Energy Technology Data Exchange (ETDEWEB) Squyres, S.W.; Mckay, C.P.; Schwartz, D.E. 1987-12-01 Science questions that could be addressed by a Space Station Microgravity Particle Research Facility for studying small suspended particles were discussed. Characteristics of such a facility were determined. Disciplines covered include astrophysics and the solar nebula, planetary science, atmospheric science, exobiology and life science, and physics and chemistry. 3. Fifteen years of international space station NARCIS (Netherlands) Verhagen, B.; Celebi, T. 2014-01-01 The International Space Station (ISS) celebrated its 15th birthday in October 2013. The ISS is the largest spaceship ever built by humans and very important for research, to understand life and physics. However, the ISS is very expensive to maintain and therefore some people argue that the ISS 4. Space Station technology testbed: 2010 deep space transport Science.gov (United States) Holt, Alan C. 1993-12-01 A space station in a crew-tended or permanently crewed configuration will provide major R&D opportunities for innovative technology and materials development and advanced space systems testing.
A space station should be designed with the basic infrastructure elements required to grow into a major systems technology testbed. This space-based technology testbed can and should be used to support the development of technologies required to expand our utilization of near-Earth space, the Moon and the Earth-to-Jupiter region of the Solar System. Space station support of advanced technology and materials development will result in new techniques for high priority scientific research and the knowledge and R&D base needed for the development of major, new commercial product thrusts. To illustrate the technology testbed potential of a space station and to point the way to a bold, innovative approach to advanced space systems development, a hypothetical deep space transport development and test plan is described. Key deep space transport R&D activities are described that would lead to the readiness certification of an advanced, reusable interplanetary transport capable of supporting eight crewmembers or more. With the support of a focused and highly motivated, multi-agency ground R&D program, a deep space transport of this type could be assembled and tested by 2010. Key R&D activities on a space station would include: (1) experimental research investigating the microgravity-assisted restructuring of micro-engineered materials (to develop and verify the in-space and in-situ 'tuning' of materials for use in debris and radiation shielding and other protective systems), (2) exposure of microengineered materials to the space environment for passive and operational performance tests (to develop in-situ maintenance and repair techniques and to support the development, enhancement, and implementation of protective systems, data and bio-processing systems, and virtual reality and 5. Emulsion chamber experiments for the Space Station Science.gov (United States) Wilkes, R. J.
Emulsion chambers offer several unique features for the study of ultrahigh-energy cosmic-ray interactions and spectra aboard a permanent manned Space Station. Emulsion-chamber experiments provide the highest acceptance/weight ratio of any current experimental technique, are invulnerable to mechanical shocks and temperature excursions associated with space flight, do not employ volatile or explosive components or materials, and are not dependent upon data communications or recording systems. Space-Station personnel would be employed to replace track-sensitive materials as required by background accumulation. Several emulsion-chamber designs are proposed, including both conventional passive calorimetric detectors and a hybrid superconducting-magnetic-spectrometer system. Results of preliminary simulation studies are presented. Operational logistics are discussed. 6. Space station integrated propulsion and fluid systems study. Space station program fluid management systems databook Science.gov (United States) Bicknell, B.; Wilson, S.; Dennis, M.; Lydon, M. 1988-01-01 Commonality and integration of propulsion and fluid systems associated with the Space Station elements are being evaluated. The Space Station elements consist of the core station, which includes habitation and laboratory modules, nodes, airlocks, and trusswork; and associated vehicles, platforms, experiments, and payloads. The program is being performed as two discrete tasks. Task 1 investigated the components of the Space Station architecture to determine the feasibility and practicality of commonality and integration among the various propulsion elements. This task was completed. Task 2 is examining integration and commonality among fluid systems which were identified by the Phase B Space Station contractors as being part of the initial operating capability (IOC) and growth Space Station architectures. 
Requirements and descriptions for reference fluid systems were compiled from Space Station documentation and other sources. The fluid systems being examined are: an experiment gas supply system, an oxygen/hydrogen supply system, an integrated water system, the integrated nitrogen system, and the integrated waste fluids system. Definitions and descriptions of alternate systems were developed, along with analyses and discussions of their benefits and detriments. This databook includes fluid systems descriptions, requirements, schematic diagrams, component lists, and discussions of the fluid systems. In addition, cost comparisons are used in some cases to determine the optimum system for a specific task. 7. Orbit lifetime characteristics for Space Station Science.gov (United States) Deryder, L.; Kelly, G. M.; Heck, M. The factors that influence the orbital lifetime characteristics of the NASA Space Station are discussed. These include altitude, attitude, launch date, ballistic coefficient, and the presence of large articulating solar arrays. Examples from previous program systems studies are presented that illustrate how each factor affects Station orbit lifetime. The effect of atmospheric density models on orbit lifetime predictions is addressed, along with the uncertainty of these predictions using current trajectory analysis of the Long Duration Exposure Facility spacecraft. Finally, nominal reboost altitude profiles and fuel requirement considerations are presented for implementing a reboost strategy based on planned Shuttle Orbiter rendezvous strategy and contingency considerations. 8. NASA space station automation: AI-based technology review Science.gov (United States) Firschein, O.; Georgeff, M. P.; Park, W.; Neumann, P.; Kautz, W. H.; Levitt, K. N.; Rom, R. J.; Poggio, A. A. 1985-01-01 Research and Development projects in automation for the Space Station are discussed.
Artificial Intelligence (AI) based automation technologies are planned to enhance crew safety through reduced need for EVA, increase crew productivity through the reduction of routine operations, increase space station autonomy, and augment space station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures. 9. Deployable Debris Shields For Space Station Science.gov (United States) Christiansen, Eric L.; Cour-Palais, Burton G.; Crews, Jeanne 1993-01-01 Multilayer shields made of lightweight sheet materials deployed from proposed Space Station Freedom for additional protection against orbiting debris. Deployment mechanism attached at each location on exterior where extra protection needed. Equipment withdraws layer of material from storage in manner similar to unfurling sail or extending window shade. Number of layers deployed depends on required degree of protection, and could be as large as five. 10. Evolutionary space station fluids management strategies Science.gov (United States) 1989-01-01 Results are summarized for an 11-month study to define fluid storage and handling strategies and requirements for various specific mission case studies and their associated design impacts on the Space Station. There are a variety of fluid users which require a variety of fluids and use rates. Also, the cryogenic propellants required for NASA's STV, Planetary, and Code Z missions are enormous. The storage methods must accommodate fluids ranging from a high pressure gas or supercritical state fluid to a sub-cooled liquid (and superfluid helium). These requirements begin in the year 1994, reach a maximum of nearly 1800 metric tons in the year 2004, and trail off to the year 2018, as currently planned. 
It is conceivable that the cryogenic propellant needs for the STV and/or Lunar mission models will be met by LTCSF LH2/LO2 tanksets attached to the SS truss structure. Concepts and corresponding transfer and delivery operations have been presented for STV propellant provisioning from the SS. A growth orbit maneuvering vehicle (OMV) and associated servicing capability will be required to move tanksets from delivery launch vehicles to the SS or co-orbiting platforms. Also, appropriate changes to the software used for OMV operation are necessary to allow for the combined operation of the growth OMV. To support fluid management activities at the Space Station for the experimental payloads and propellant provisioning, there must be truss structure space allocated for fluid carriers and propellant tanksets, and substantial beam strengthening may be required. The Station must have two Mobile Remote Manipulator Systems (MRMS) to support the growth OMV propellant handling operations for the STV at the SS. Propellant needs for the Planetary Initiatives and Code Z mission models will most likely be provided by co-orbiting propellant platform(s). Space Station impacts for Code Z mission fluid management activities will be minimal. 11. Momentum management strategy during Space Station buildup Science.gov (United States) Bishop, Lynda; Malchow, Harvey; Hattis, Philip 1988-01-01 The use of momentum storage devices to control effectors for Space Station attitude control throughout the buildup sequence is discussed. Particular attention is given to the problem of providing satisfactory management of momentum storage effectors throughout buildup while experiencing variable torque loading. Continuous and discrete control strategies are compared and the effects of alternative control moment gyro strategies on peak momentum storage requirements and on commanded maneuver characteristics are described. 12.
Space Station Freedom primary power wiring requirements Science.gov (United States) Hill, Thomas J. 1994-09-01 The Space Station Freedom (SSF) Program requirements are a 30 year reliable service life in low Earth orbit in hard vacuum or pressurized module service without detrimental degradation. Specific requirements are outlined in this presentation for SSF primary power and cable insulation. The primary power cable status and the WP-4 planned cable test program are also reviewed along with Rocketdyne-WP04 prime insulation candidates. 13. Space Station Control Moment Gyroscope Lessons Learned Science.gov (United States) Gurrisi, Charles; Seidel, Raymond; Dickerson, Scott; Didziulis, Stephen; Frantz, Peter; Ferguson, Kevin 2010-01-01 Four 4760 Nms (3510 ft-lbf-s) Double Gimbal Control Moment Gyroscopes (DGCMG) with unlimited gimbal freedom about each axis were adopted by the International Space Station (ISS) Program as the non-propulsive solution for continuous attitude control. These CMGs, with a life expectancy of approximately 10 years, contain a flywheel spinning at 691 rad/s (6600 rpm) and can produce an output torque of 258 Nm (190 ft-lbf). One CMG unexpectedly failed after approximately 1.3 years and one developed anomalous behavior after approximately six years. Both units were returned to Earth for failure investigation. This paper describes the Space Station Double Gimbal Control Moment Gyroscope design, on-orbit telemetry signatures, and a summary of the results of both failure investigations. The lessons learned from these combined sources have led to improvements in the design that will provide CMGs with greater reliability to assure the success of the Space Station. These lessons learned and design improvements are not only applicable to CMGs but can be applied to spacecraft mechanisms in general. 14.
International Space Station -- Fluid Physics Rack Science.gov (United States) 2000-01-01 The optical bench for the Fluids Integrated Rack section of the Fluids and Combustion Facility (FCF) is shown extracted for servicing. The FCF will be installed, in phases, in Destiny, the U.S. Laboratory Module of the International Space Station (ISS), and will accommodate multiple users for a range of investigations. This is an engineering mockup; the flight hardware is subject to change as designs are refined. The FCF is being developed by the Microgravity Science Division (MSD) at the NASA Glenn Research Center. (Photo credit: NASA/Marshall Space Flight Center) 15. Physics Research on the International Space Station CERN Multimedia CERN. Geneva 2012-01-01 The International Space Station (ISS) is orbiting Earth at an altitude of around 400 km. It has been manned since November 2000 and currently has a permanent crew of six. On-board ISS science is done in a wide field of sciences, from fundamental physics to biology and human physiology. Many of the experiments utilize the unique conditions of weightlessness, but the views of space and the Earth are also exploited. ESA’s (European Space Agency) ELIPS (European Programme for Life and Physical Sciences in Space) manages some 150 on-going and planned experiments for ISS, which is expected to be utilized at least to 2020. This presentation will give a short introduction to ISS, followed by an overview of the science field within ELIPS and some recent results. The emphasis, however, will be on ISS experiments which are close to the research performed at CERN. Silicon strip detectors like ALTEA are measuring the flux of ions inside the station. ACES (Atomic Clock Ensemble in Space) will provide unprecedented global ti... 16.
Simple Solutions for Space Station Audio Problems Science.gov (United States) Wood, Eric 2016-01-01 Throughout this summer, a number of different projects were supported relating to various NASA programs, including the International Space Station (ISS) and Orion. The primary project that was worked on was designing and testing an acoustic diverter which could be used on the ISS to increase sound pressure levels in Node 1, a module that does not have any Audio Terminal Units (ATUs) inside it. This acoustic diverter is not intended to be a permanent solution to providing audio to Node 1; it is simply intended to improve conditions while more permanent solutions are under development. One of the most exciting aspects of this project is that the acoustic diverter is designed to be 3D printed on the ISS, using the 3D printer that was set up earlier this year. Because of this, no new hardware needs to be sent up to the station, and no extensive hardware testing needs to be performed on the ground before sending it to the station. Instead, the 3D part file can simply be uploaded to the station's 3D printer, where the diverter will be made. 17. Complex researches aboard the international space station Science.gov (United States) Pokhyl, Yu. A. 
Special Research and Development Bureau (SRDB) is the general organizer, on the Ukrainian side, of three Ukrainian-Russian joint experiments to be implemented aboard the Russian segment of the International Space Station (RS-ISS). Experiment "Material-Friction": it is proposed to carry out a series of comparative tribological research under conditions of orbital flight aboard the ISS versus those in on-ground laboratory conditions. To meet these objectives, a special onboard 6-module space-borne tribometer facility will be employed. The on-ground research will be implemented under conditions of laboratory simulation of space environmental factors (SEF). Results thus obtained would enable one to forecast the behavior of friction pairs as well as the functional safety and lifetime of the space vehicle. This experiment will also enable us to determine the adequacy of tribological results obtained under conditions of outer space and on-ground simulation. Experiment "Penta-Fatigue": it is proposed to develop, fabricate, and deliver aboard the RS-ISS a facility intended for studies of SEF influence on the resistance of metallic and polymeric materials to fatigue destruction. Such a project, implemented in outer space for the first time, would enable us to estimate the parameter of cosmic lifetime for constructional materials through such a mechanical characteristic as fatigue strength, so as to enable selection of specific constructional materials appropriate for service in space technologies. At the same time 18. International Space Station Video Progress Report Science.gov (United States) 2000-01-01 A narrated overview of the construction and assembly of the International Space Station (ISS) is given through a collection of clips ranging from the launch of the Russian Proton rocket containing the Zvezda module to computerized animations showing the installation of the Zarya and Unity connecting modules.
Footage from some of the space missions that assembled the ISS in space (i.e., STS-106 and STS-92) are seen. The Z1 truss (including the deployment of the solar arrays), Destiny Laboratory Module, Leonardo Module, the Japanese Kibo Experiment Module, Columbus Pressurized Module, and the ISS's robotic arm are seen. Animations show the assembly and evolution of the ISS as new components are added. 19. Space Station Freedom - A resource for aerospace education Science.gov (United States) Brown, Robert W. 1988-01-01 The role of the International Space Station in future U.S. aerospace education efforts is discussed from a NASA perspective. The overall design concept and scientific and technological goals of the Space Station are reviewed, and particular attention is given to education projects such as the Davis Planetarium Student Space Station, the Starship McCullough, the Space Habitat, the working Space Station model in Austin, TX, the Challenger Center for Space Life Education, Space M+A+X, and the Space Science Student Involvement Program. Also examined are learning-theory aspects of aerospace education: child vs adult learners, educational objectives, teaching methods, and instructional materials. 20. Space station needs, attributes and architectural options: Study summary Science.gov (United States) 1983-01-01 Space station needs, attributes, and architectural options that affect the future implementation and design of a space station system are examined. Requirements for candidate missions are used to define functional attributes of a space station. Station elements that perform these functions form the basic station architecture. Alternative ways to accomplish these functions are defined and configuration concepts are developed and evaluated. Configuration analyses are carried to the point that budgetary cost estimates of alternate approaches could be made. 
Emphasis is placed on differential costs for station support elements and benefits that accrue through use of the station. 1. Space Station GPS Multipath Analysis and Validation Science.gov (United States) Hwu, Shian U.; Loh, Y. C. 1999-01-01 To investigate the multipath effects on the International Space Station (ISS) Global Positioning System (GPS) measurement accuracy, experimental and computational investigations were performed to estimate the carrier phase errors due to multipath. A new modeling approach is used to reduce the required computing time by separating the dynamic structure elements from the static structure elements in the multipath computations. This study confirmed that multipath is a major error source for ISS GPS performance and can possibly degrade the attitude determination solution. It is demonstrated that the GPS antenna carrier phase errors due to multipath can be analyzed using an electromagnetic modeling technique such as the Uniform Geometrical Theory of Diffraction (UTD). 2. International Space Station Configuration Analysis and Integration Science.gov (United States) Anchondo, Rebekah 2016-01-01 Ambitious engineering projects, such as NASA's International Space Station (ISS), require dependable modeling, analysis, visualization, and robotics to ensure that complex mission strategies are carried out cost effectively, sustainably, and safely. Learn how Booz Allen Hamilton's Modeling, Analysis, Visualization, and Robotics Integration Center (MAVRIC) team performs engineering analysis of the ISS Configuration based primarily on the use of 3D CAD models. To support mission planning and execution, the team tracks the configuration of ISS and maintains configuration requirements to ensure operational goals are met. The MAVRIC team performs multi-disciplinary integration and trade studies to ensure future configurations meet stakeholder needs. 3.
The MSFC space station/space operations mechanism test bed Science.gov (United States) Sutton, William G.; Tobbe, Patrick A. The Space Station/Space Operations Mechanism Test Bed consists of the following: a hydraulically driven, computer controlled Six Degree-of-Freedom Motion System (6DOF); a six degree-of-freedom force and moment sensor; remote driving stations with computer generated or live TV graphics; and a parallel digital processor that performs calculations to support the real time simulation. The function of the Mechanism Test Bed is to test docking and berthing mechanisms for Space Station Freedom and other orbiting space vehicles in a real time, hardware-in-the-loop simulation environment. Typically, the docking and berthing mechanisms are composed of two mating components, one for each vehicle. In the facility, one component is attached to the motion system, while the other component is mounted to the force/moment sensor fixed in the support structure above the 6DOF. The six components of the contact forces/moments acting on the test article and its mating component are measured by the force/moment sensor. 4. Microbial Monitoring of the International Space Station Science.gov (United States) Pierson, Duane L.; Botkin, Douglas J.; Bruce, Rebekah J.; Castro, Victoria A.; Smith, Melanie J.; Oubre, Cherie M.; Ott, C. Mark 2013-01-01 microbial growth. Air filtration can dramatically reduce the number of airborne bacteria, fungi, and particulates in spacecraft breathing air. Waterborne bacteria can be reduced to acceptable levels by thermal inactivation of bacteria during water processing, along with a residual biocide, and filtration at the point of use can ensure safety. System design must include onboard capability to achieve recovery of the system from contamination. Robust housekeeping procedures that include periodic cleaning and disinfection will prevent high levels of microbial growth on surfaces. 
Food for consumption in space must be thoroughly tested for excessive microbial content and pathogens before launch. Thorough preflight examination of flight crews, consumables, payloads, and the environment can greatly reduce pathogens in spacecraft. Many of the lessons learned from the Space Shuttle and previous programs were applied in the early design phase of the International Space Station, resulting in the safest space habitat to date. This presentation describes the monitoring program for the International Space Station and will summarize results from preflight and on-orbit monitoring. 5. Plasma contactor technology for Space Station Freedom Science.gov (United States) Patterson, Michael J.; Hamley, John A.; Sarver-Verhey, Timothy; Soulas, George C.; Parkes, James; Ohlinger, Wayne L.; Schaffner, Michael S.; Nelson, Amy 1993-01-01 Hollow cathode plasma contactors have been baselined for Space Station Freedom to control the electrical potentials of surfaces to eliminate/mitigate damaging interactions with the space environment. The system represents a dual-use technology which is a direct outgrowth of the NASA electric propulsion program and in particular the technology development effort on ion thruster systems. Specific efforts include optimizing the design and configuration of the contactor, validating its required lifetime, and characterizing the contactor plume and electromagnetic interference. The plasma contactor subsystems include the plasma contactor unit, a power electronics unit, and an expellant management unit. Under this program these will all be brought to breadboard and engineering model development status. New test facilities have been developed, and existing facilities have been augmented, to support characterizations and life testing of contactor components and systems. This paper discusses the magnitude, scope, and status of the plasma contactor hardware development program now under way and preliminary test results on system components. 
6. Plasma contactor development for Space Station Science.gov (United States) Patterson, Michael J.; Hamley, John A.; Sarmiento, Charles J.; Manzella, David H.; Sarver-Verhey, Timothy; Soulas, George C.; Nelson, Amy 1993-01-01 Plasma contactors have been baselined for the Space Station (SS) to control the electrical potentials of surfaces to eliminate/mitigate damaging interactions with the space environment. The system represents a dual-use technology which is a direct outgrowth of the NASA electric propulsion program and, in particular, the technology development effort on ion thruster systems. The plasma contactor subsystems include the plasma contactor unit, a power electronics unit, and an expellant management unit. Under this pre-flight development program these will all be brought to breadboard or engineering model status. Development efforts for the plasma contactor include optimizing the design and configuration of the contactor, validating its required lifetime, and characterizing the contactor plume and electromagnetic interference. The plasma contactor unit design selected for the SS is an enclosed keeper, xenon hollow cathode plasma source. This paper discusses the test results and development status of the plasma contactor unit subsystem for the SS. 7. Now calling at the International Space Station CERN Multimedia Katarina Anthony 2012-01-01 On 31 July, an unmanned Russian Progress spacecraft was launched from the desert steppe of Kazakhstan. Its destination: the International Space Station (ISS). On board: five Timepix detectors developed by the Medipix2 Collaboration. With the Timepix on board, Progress 48 was launched 31 July from the Baikonur Cosmodrome in Kazakhstan. Source: RSC Energia. Timepix detectors are small, USB-powered particle trackers based on Medipix2 technology.
The Timepix chip, which was developed at CERN, is coupled to a silicon sensor and incorporated into a miniature readout system - developed at IEAP, Prague - which is about the size of a USB pen drive. These systems have been used across a variety of disciplines: from the study of cosmic rays to biomedical imaging. Now on board the ISS, they are providing highly accurate measurements of space radiation for dosimetry purposes. “There’s nothing else in the world that has quite the capability of Timepix detectors to ... 8. Problems and concepts of space station guidance, navigation, and control Science.gov (United States) Guha, A. K.; Craig, M. The Space Station System is defined as a network of space and ground assets which work together to support a variety of missions including commercial missions, science and applications missions, and technology development missions. The elements of the Space Station System include a Space Station Base, Space Platforms, Free Flyers, a Teleoperator Maneuvering System (TMS), Orbital Transfer Vehicles (OTV), Orbiter Berthing Equipment, and Ground Support Equipment and Facilities. Guidance, navigation, and control (GNC) subsystem requirements are considered along with configuration trades. 9. International Space Station Lithium-Ion Battery Science.gov (United States) Dalton, Penni J.; Schwanbeck, Eugene; North, Tim; Balcer, Sonia 2016-01-01 The International Space Station (ISS) primary Electric Power System (EPS) currently uses Nickel-Hydrogen (Ni-H2) batteries to store electrical energy. The electricity for the space station is generated by its solar arrays, which charge batteries during insolation for subsequent discharge during eclipse. The Ni-H2 batteries are designed to operate at a 35% depth of discharge (DOD) maximum during normal operation in a Low Earth Orbit.
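The depth-of-discharge limit mentioned in the battery entry above directly caps the energy that may be drawn from a battery each eclipse pass. A minimal sketch of that arithmetic follows; the capacity and bus-voltage figures are illustrative assumptions, not ISS specifications.

```python
# Illustrative sketch (not flight data): how a depth-of-discharge (DOD) limit
# constrains the usable energy drawn from a battery per discharge cycle.
# All numeric values below are assumed example values, not ISS specifications.

def usable_energy_wh(capacity_ah: float, bus_voltage_v: float, dod: float) -> float:
    """Energy (Wh) available per discharge at a given maximum depth of discharge."""
    return capacity_ah * bus_voltage_v * dod

# Example: a hypothetical 81 Ah battery on a 120 V bus, limited to 35% DOD.
energy = usable_energy_wh(81.0, 120.0, 0.35)   # 81 * 120 * 0.35 = 3402 Wh
```

Operating well below 100% DOD in this way is what allows a battery to survive the tens of thousands of charge/discharge cycles accumulated in low Earth orbit.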
The oldest of the 48 Ni-H2 battery Orbital Replacement Units (ORUs) has been cycling since September 2006, and these batteries are now approaching the end of their useful life. In 2010, the ISS Program began the development of Lithium-Ion (Li-Ion) batteries to replace the Ni-H2 batteries and concurrently funded a Li-Ion ORU and cell life testing project. When deployed, they will be the largest Li-Ion batteries ever utilized for a human-rated spacecraft. This paper will include an overview of the ISS Li-Ion battery system architecture, the Li-Ion battery design and development, controls to limit potential hazards from the batteries, and the status of the Li-Ion cell and ORU life cycle testing. 10. Space station interior design: Results of the NASA/AIA space station interior national design competition Science.gov (United States) Haines, R. F. 1975-01-01 The results of the NASA/AIA space station interior national design competition held during 1971 are presented in order to make available to those who work in the architectural, engineering, and interior design fields the results of this design activity in which the interiors of several space shuttle size modules were designed for optimal habitability. Each design entry also includes a final configuration of all modules into a complete space station. A brief history of the competition is presented with the competition guidelines and constraints. The first place award entry is presented in detail, and specific features from other selected designs are discussed. This is followed by a discussion of how some of these design features might be applied to terrestrial as well as space situations. 11. The Bus Station Spacing Optimization Based on Game Theory Directory of Open Access Journals (Sweden) Changjiang Zheng 2015-01-01 Full Text Available With the development of cities, the problem of traffic congestion is becoming more and more serious. Developing public transportation has become the key to solving this problem in all countries.
Within the existing public transit network, improving bus operation efficiency and reducing residents’ transit trip costs has become a simple and effective way to develop public transportation. Bus stop spacing is an important factor affecting passengers’ travel time, so setting stop spacing appropriately is key to reducing it. Based on a comprehensive traffic survey, theoretical analysis, and a summary of urban public transport characteristics, this paper analyzes the impact of bus stop spacing on passengers’ in-bus and out-bus time costs and establishes models for both. Finally, the paper obtains the best balanced station spacing by introducing game theory. 12. Role of the Space Station in Private Development of Space Science.gov (United States) Uhran, M. L. 2002-01-01 The International Space Station (ISS) is well underway in the assembly process and progressing toward completion. In February 2001, the United States laboratory "Destiny" was successfully deployed and the course of space utilization, for laboratory-based research and development (R&D) purposes, entered a new era - continuous on-orbit operations. By completion, the ISS complex will include pressurized laboratory elements from Europe, Japan, Russia and the U.S., as well as external platforms which can serve as observatories and technology development test beds serviced by a Canadian robotic manipulator. The international vision for a continuously operating, full service R&D complex in the unique environment of low-Earth orbit is becoming increasingly focused. This R&D complex will offer great opportunities for economic return as the basic research program proceeds on a global scale and the competitive advantages of the microgravity and ultravacuum environments are elucidated through empirical studies.
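The in-bus/out-bus trade-off described in the bus stop spacing entry (entry 11 above) can be sketched as a minimal cost model: wider spacing increases walking (out-bus) time but reduces dwell (in-bus) delay. All parameter values here are illustrative assumptions, and the paper itself balances the two costs via game theory rather than the simple grid search used below.

```python
# Minimal sketch of the stop-spacing trade-off: wider spacing s raises
# out-bus (walking) time but lowers in-bus (dwell) delay.
# Parameter values are illustrative assumptions, not taken from the paper.

def total_time_cost(s_km: float,
                    trip_len_km: float = 5.0,
                    walk_speed_kmh: float = 5.0,
                    bus_speed_kmh: float = 25.0,
                    dwell_s: float = 20.0) -> float:
    walk_h = 2 * (s_km / 4) / walk_speed_kmh           # average walk ~s/4 at each trip end
    ride_h = trip_len_km / bus_speed_kmh               # time spent moving
    dwell_h = (trip_len_km / s_km) * (dwell_s / 3600)  # dwell delay at intermediate stops
    return walk_h + ride_h + dwell_h

# Crude grid search for the balancing spacing (the paper uses game theory instead).
candidates = [0.1 + 0.01 * i for i in range(200)]
best_spacing = min(candidates, key=total_time_cost)
```

With these assumed parameters the minimum falls near 0.5 km, illustrating why urban stop spacings typically sit in the 300-600 m range.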
In parallel, the ISS offers a new vantage point, both as a source for viewing of Earth and the Cosmos and as the subject of view for a global population that has grown during the dawning of the space age. In this regard, the ISS is both a working laboratory and a powerful symbol for human achievement in science and technology. Each of these aspects bears consideration as we seek to develop the beneficial attributes of space and pursue innovative approaches to expanding this space complex through private investment. Ultimately, the success of the ISS will be measured by the outcome at the end of its design lifetime. Will this incredible complex be de-orbited in a fiery finale, as have previous space platforms? Will another, perhaps still larger, space station be built through global government funding? Will the ISS ownership be transferred to a global, non-government organization for refurbishment and continuation of the mission on a privately financed basis? Steps taken 13. International Space Station (ISS) SERVIR Environmental Research and Visualization System Photos: 2013-2014 Data.gov (United States) U.S. Geological Survey, Department of the Interior — The ISS SERVIR Environmental Research and Visualization System (ISERV) acquired images of the Earth's surface from the International Space Station (ISS). The goal... 14. 78 FR 49296 - NASA International Space Station Advisory Committee; Meeting Science.gov (United States) 2013-08-13 ... Federal Advisory Committee Act, Public Law 92-463, as amended, the National Aeronautics and Space Administration announces a meeting of the NASA International Space Station (ISS) Advisory Committee. The purpose... SPACE ADMINISTRATION NASA International Space Station Advisory Committee; Meeting AGENCY: National... 15. 77 FR 66082 - NASA International Space Station Advisory Committee; Meeting Science.gov (United States) 2012-11-01 ... 
Federal Advisory Committee Act, Public Law 92-463, as amended, the National Aeronautics and Space Administration announces an open meeting of the NASA International Space Station (ISS) Advisory Committee. The... SPACE ADMINISTRATION NASA International Space Station Advisory Committee; Meeting AGENCY: National... 16. 77 FR 41203 - NASA International Space Station Advisory Committee; Meeting Science.gov (United States) 2012-07-12 ... Federal Advisory Committee Act, Public Law 92-463, as amended, the National Aeronautics and Space Administration announces an open meeting of the NASA International Space Station (ISS) Advisory Committee. The... SPACE ADMINISTRATION NASA International Space Station Advisory Committee; Meeting AGENCY: National... 17. 75 FR 51852 - NASA International Space Station Advisory Committee; Meeting Science.gov (United States) 2010-08-23 ..., Public Law 92-463, as amended, the National Aeronautics and Space Administration announces an open meeting of the NASA International Space Station Advisory Committee. The purpose of the meeting is to... International Space Station Advisory Committee; Meeting AGENCY: National Aeronautics and Space Administration... 18. 77 FR 2765 - NASA International Space Station Advisory Committee; Meeting Science.gov (United States) 2012-01-19 ... Federal Advisory Committee Act, Public Law 92-463, as amended, the National Aeronautics and Space Administration announces an open meeting of the NASA International Space Station (ISS) Advisory Committee. The... SPACE ADMINISTRATION NASA International Space Station Advisory Committee; Meeting AGENCY: National... 19. 78 FR 77502 - NASA International Space Station Advisory Committee; Meeting Science.gov (United States) 2013-12-23 ... Federal Advisory Committee Act, Public Law 92-463, as amended, the National Aeronautics and Space Administration announces a meeting of the NASA International Space Station (ISS) Advisory Committee. The purpose... 
SPACE ADMINISTRATION NASA International Space Station Advisory Committee; Meeting AGENCY: National... 20. Space Station evolution study oxygen loop closure Science.gov (United States) Wood, M. G.; Delong, D. 1993-01-01 In the current Space Station Freedom (SSF) Permanently Manned Configuration (PMC), physical scars for closing the oxygen loop by the addition of oxygen generation and carbon dioxide reduction hardware are not included. During station restructuring, the capability for oxygen loop closure was deferred to the B-modules. As such, the ability to close the oxygen loop in the U.S. Laboratory module (LAB A) and the Habitation A module (HAB A) is contingent on the presence of the B modules. To base oxygen loop closure of SSF on the funding of the B-modules may not be desirable. Therefore, this study was requested to evaluate the necessary hooks and scars in the A-modules to facilitate closure of the oxygen loop at or subsequent to PMC. The study defines the scars for oxygen loop closure with impacts to cost, weight and volume and assesses the effects of byproduct venting. In addition, the recommended scenarios for closure with regard to topology and packaging are presented. 1. International Space Station Acoustics - A Status Report Science.gov (United States) Allen, Christopher S. 2015-01-01 It is important to control acoustic noise aboard the International Space Station (ISS) to provide a satisfactory environment for voice communications, crew productivity, alarm audibility, and restful sleep, and to minimize the risk for temporary and permanent hearing loss. Acoustic monitoring is an important part of the noise control process on ISS, providing critical data for trend analysis, noise exposure analysis, validation of acoustic analyses and predictions, and to provide strong evidence for ensuring crew health and safety, thus allowing Flight Certification. 
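The noise exposure analysis mentioned in the ISS acoustics status entry above rests on logarithmic level averaging: decibel readings cannot simply be averaged arithmetically. A minimal sketch of that building block follows; it illustrates the math only and is not the actual ISS monitoring procedure or its acoustic limits.

```python
import math

# Sketch of one building block of noise-exposure analysis: combining
# equal-duration sound level meter (SLM) readings, in dB, into an
# equivalent continuous level (Leq). Levels add on a power basis,
# so a single loud interval dominates the average.

def leq(levels_db):
    """Equivalent continuous level of equal-duration dB samples."""
    mean_power = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(mean_power)

quiet = leq([60.0, 60.0])   # two equal 60 dB intervals -> 60 dB
mixed = leq([60.0, 72.0])   # the 72 dB interval dominates the result
```

This is why dosimetry trends track energy-averaged levels rather than arithmetic means of readings.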
To this end, sound level meter (SLM) measurements and acoustic noise dosimetry are routinely performed. And since the primary noise sources on ISS include the environmental control and life support system (fans and airflow) and active thermal control system (pumps and water flow), acoustic monitoring will reveal changes in hardware noise emissions that may indicate system degradation or performance issues. This paper provides the current acoustic levels in the ISS modules and sleep stations and is an update to the status presented in 2011. Since this last status report, many payloads (science experiment hardware) have been added and a significant number of quiet ventilation fans have replaced noisier fans in the Russian Segment. Also, noise mitigation efforts are planned to reduce the noise levels of the T2 treadmill and levels in Node 3, in general. As a result, the acoustic levels on the ISS continue to improve. 2. MISS - Mice on International Space Station Science.gov (United States) Falcetti, G. C.; Schiller, P. 2005-08-01 The use of rodents for scientific research to bridge the gap between cellular biology and human physiology is a new challenge within the history of successful developments of biological facilities. The ESA-funded MISS Phase A/B study is aimed at developing a design concept for an animal holding facility able to support experimentation with mice on board the International Space Station (ISS). The MISS facility is composed of two main parts: 1. The MISS Rack, to perform scientific experiments onboard the ISS. 2. The MISS Animals Transport Container (ATC), to transport animals from ground to orbit and vice versa. The MISS facility design takes into account guidelines and recommendations used for mice well-being in ground laboratories. A summary of the MISS Rack and MISS ATC design concept is hereafter provided. 3. The space station tethered elevator system Science.gov (United States) Anderson, Loren A.
1989-01-01 The optimized conceptual engineering design of a space station tethered elevator is presented. The elevator is an unmanned mobile structure which operates on a ten kilometer tether spanning the distance between the Space Station and a tethered platform. Elevator capabilities include providing access to residual gravity levels, remote servicing, and transportation to any point along a tether. The potential uses, parameters, and evolution of the spacecraft design are discussed. Engineering development of the tethered elevator is the result of work conducted in the following areas: structural configurations; robotics, drive mechanisms; and power generation and transmission systems. The structural configuration of the elevator is presented. The structure supports, houses, and protects all systems on board the elevator. The implementation of robotics on board the elevator is discussed. Elevator robotics allow for the deployment, retrieval, and manipulation of tethered objects. Robotic manipulators also aid in hooking the elevator on a tether. Critical to the operation of the tethered elevator is the design of its drive mechanisms, which are discussed. Two drivers, located internal to the elevator, propel the vehicle along a tether. These modular components consist of endless toothed belts, shunt-wound motors, regenerative power braking, and computer controlled linear actuators. The designs of self-sufficient power generation and transmission systems are reviewed. Thorough research indicates all components of the elevator will operate under power provided by fuel cells. The fuel cell systems will power the vehicle at seven kilowatts continuously and twelve kilowatts maximally. A set of secondary fuel cells provides redundancy in the unlikely event of a primary system failure. Power storage exists in the form of Nickel-Hydrogen batteries capable of powering the elevator under maximum loads. 4. 
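The "residual gravity levels" accessible along the tether in the elevator entry above can be estimated from the orbital gravity gradient: at a distance dr from the system's center of mass, the along-tether acceleration is approximately 3·ω²·dr, where ω is the orbital angular rate. The sketch below assumes a roughly 400 km ISS-like orbit; the altitude and constants are assumptions for illustration.

```python
import math

# Sketch of the residual gravity a tethered elevator can access.
# Gravity-gradient acceleration along a vertical tether: a = 3 * w^2 * dr,
# with w the orbital angular rate. Orbit altitude here is an assumption.

MU_EARTH = 3.986e14   # Earth's gravitational parameter, m^3/s^2
R_ORBIT = 6.771e6     # orbit radius, m (~400 km altitude, assumed)
G0 = 9.81             # standard gravity, m/s^2

w = math.sqrt(MU_EARTH / R_ORBIT**3)   # orbital angular rate, rad/s

def residual_g(dr_m: float) -> float:
    """Gravity-gradient acceleration (in units of g) at dr metres from the CG."""
    return 3 * w**2 * dr_m / G0

g_level = residual_g(10_000.0)   # level at the far end of a 10 km tether
```

At the end of the 10 km tether this works out to a few milli-g, which is the scale of "residual gravity" the elevator would make available to experiments.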
Solar dynamic power for Space Station Freedom Science.gov (United States) Labus, Thomas L.; Secunde, Richard R.; Lovely, Ronald G. 1989-01-01 The Space Station Freedom Program is presently planned to consist of two phases. At the completion of Phase 1, Freedom's manned base will consist of a transverse boom with attached manned modules and 75 kW of available electric power supplied by photovoltaic (PV) power sources. In Phase 2, electric power available to the manned base will be increased to 125 kW by the addition of two solar dynamic (SD) power modules, one at each end of the transverse boom. Power for manned base growth beyond Phase 2 will be supplied by additional SD modules. Studies show that SD power for the growth eras will result in life cycle cost savings of $3 to $4 billion when compared to PV-supplied power. In the SD power modules for Space Station Freedom, an offset parabolic concentrator collects and focuses solar energy into a heat receiver. To allow full power operation over the entire orbit, the receiver includes integral thermal energy storage by means of the heat of fusion of a salt mixture. Thermal energy is removed from the receiver and converted to electrical energy by a power conversion unit (PCU) which includes a closed Brayton cycle (CBC) heat engine and an alternator. The receiver/PCU/radiator combination will be completely assembled and charged with gas and cooling fluid on earth before launch to orbit. The concentrator subassemblies will be pre-aligned and stowed in the orbiter bay before launch. On orbit, the receiver/PCU/radiator assembly will be installed as a unit. The pre-aligned concentrator panels will then be latched together and the total concentrator attached to the receiver/PCU/radiator by the astronauts. After final electric connections are made and checkout is complete, the SD power module will be ready for operation. 5.
Servicing capability for the evolutionary Space Station Science.gov (United States) Thomas, Edward F.; Grems, Edward G., III; Corbo, James E. 1990-01-01 6. International Space Station External Contamination Environment for Space Science Utilization Science.gov (United States) Soares, Carlos E.; Mikatarian, Ronald R.; Steagall, Courtney A.; Huang, Alvin Y.; Koontz, Steven; Worthy, Erica 2014-01-01 The International Space Station (ISS) is the largest and most complex on-orbit platform for space science utilization in low Earth orbit. Multiple sites for external payloads, with exposure to the associated natural and induced environments, are available to support a variety of space science utilization objectives. Contamination is one of the induced environments that can impact performance, mission success and science utilization on the vehicle. The ISS has been designed, built and integrated with strict contamination requirements to provide low levels of induced contamination on external payload assets. This paper addresses the ISS induced contamination environment at attached payload sites, both at the requirements level as well as measurements made on returned hardware, and contamination forecasting maps being generated to support external payload topology studies and science utilization. 7. The Biotechnology Facility for International Space Station Science.gov (United States) Goodwin, Thomas; Lundquist, Charles; Hurlbert, Katy; Tuxhorn, Jennifer 2004-01-01 The primary mission of the Cellular Biotechnology Program is to advance microgravity as a tool in basic and applied cell biology. The microgravity environment can be used to study fundamental principles of cell biology and to achieve specific applications such as tissue engineering. The Biotechnology Facility (BTF) will provide a state-of-the-art facility to perform cellular biotechnology research onboard the International Space Station (ISS). 
The BTF will support continuous operation, which will allow performance of long-duration experiments and will significantly increase the on-orbit science throughput. With the BTF, dedicated ground support, and a community of investigators, the goals of the Cellular Biotechnology Program at Johnson Space Center are to: Support approximately 400 typical investigator experiments during the nominal design life of BTF (10 years). Support a steady increase in investigations per year, starting with stationary bioreactor experiments and adding rotating bioreactor experiments at a later date. Support at least 80% of all new cellular biotechnology investigations selected through the NASA Research Announcement (NRA) process. Modular components - to allow sequential and continuous experiment operations without cross-contamination. Increased cold storage capability (+4 °C, -80 °C, -180 °C). Storage of frozen cell culture inoculum - to allow sequential investigations. Storage of post-experiment samples - for return of high quality samples. Increased number of cell cultures per investigation, with replicates - to provide a sufficient number of samples for data analysis and publication of results in peer-reviewed scientific journals. 8. The opportunities for space biology research on the Space Station Science.gov (United States) Ballard, Rodney W.; Souza, Kenneth A. 1987-01-01 The life sciences research facilities for the Space Station are being designed to accommodate both animal and plant specimens for long-duration studies. This will enable research on how living systems adapt to microgravity, how gravity has shaped and affected life on earth, and further the understanding of basic biological phenomena. This would include multigeneration experiments on the effects of microgravity on the reproduction, development, growth, physiology, behavior, and aging of organisms.
To achieve these research goals, a modular habitat system and on-board variable gravity centrifuges, capable of holding various animal, plant, cell, and tissue specimens, are proposed for the science laboratory. 9. STS-100 Onboard Photograph-International Space Station Science.gov (United States) 2001-01-01 Backdropped against the blue and white Earth, and sporting a readily visible new addition in the form of the Canadarm2 or Space Station Remote Manipulator System (SSRMS), the International Space Station was photographed following separation from the Space Shuttle Endeavour. 10. Omics Research on the International Space Station Science.gov (United States) Love, John 2015-01-01 The International Space Station (ISS) is an orbiting laboratory whose goals include advancing science and technology research. Completion of ISS assembly ushered in a new era focused on utilization, encompassing multiple disciplines such as Biology and Biotechnology, Physical Sciences, Technology Development and Demonstration, Human Research, Earth and Space Sciences, and Educational Activities. The research complement planned for upcoming ISS Expeditions 45&46 includes several investigations in the new field of omics, which aims to collectively characterize sets of biomolecules (e.g., genomic, epigenomic, transcriptomic, proteomic, and metabolomic products) that translate into organismic structure and function. For example, Multi-Omics is a JAXA investigation that analyzes human microbial metabolic cross-talk in the space ecosystem by evaluating data from immune dysregulation biomarkers, metabolic profiles, and microbiota composition. The NASA OsteoOmics investigation studies gravitational regulation of osteoblast genomics and metabolism. Tissue Regeneration uses pan-omics approaches with cells cultured in bioreactors to characterize factors involved in mammalian bone tissue regeneration in microgravity.
Rodent Research-3 includes an experiment that implements pan-omics to evaluate therapeutically significant molecular circuits, markers, and biomaterials associated with microgravity wound healing and tissue regeneration in bone defective rodents. The JAXA Mouse Epigenetics investigation examines molecular alterations in organ specific gene expression patterns and epigenetic modifications, and analyzes murine germ cell development during long term spaceflight. Lastly, Twins Study ("Differential effects of homozygous twin astronauts associated with differences in exposure to spaceflight factors"), NASA's first foray into human omics research, applies integrated analyses to assess biomolecular responses to physical, physiological, and environmental stressors associated 11. Epigenetics Research on the International Space Station Science.gov (United States) Love, John; Cooley, Vic 2016-01-01 The International Space Station (ISS) is a state-of-the-art orbiting laboratory focused on advancing science and technology research. Experiments being conducted on the ISS include investigations in the emerging field of Epigenetics. Epigenetics refers to stably heritable changes in gene expression or cellular phenotype (the transcriptional potential of a cell) resulting from changes in a chromosome without alterations to the underlying DNA nucleotide sequence (the genetic code), which are caused by external or environmental factors, such as spaceflight microgravity. Molecular mechanisms associated with epigenetic alterations regulating gene expression patterns include covalent chemical modifications of DNA (e.g., methylation) or histone proteins (e.g., acetylation, phosphorylation, or ubiquitination). For example, Epigenetics ("Epigenetics in Spaceflown C. elegans") is a recent JAXA investigation examining whether adaptations to microgravity transmit from one cell generation to another without changing the basic DNA of the organism.
Mouse Epigenetics ("Transcriptome Analysis and Germ-Cell Development Analysis of Mice in Space") investigates molecular alterations in organ-specific gene expression patterns and epigenetic modifications, and analyzes murine germ cell development during long-term spaceflight, as well as assessing changes in offspring DNA. NASA's first foray into human Omics research, the Twins Study ("Differential effects of homozygous twin astronauts associated with differences in exposure to spaceflight factors"), includes investigations evaluating differential epigenetic effects via comprehensive whole genome analysis, the landscape of DNA and RNA methylation, and biomolecular changes by means of longitudinal integrated multi-omics research. And the inaugural Genes in Space student challenge experiment (Genes in Space-1) is aimed at understanding how epigenetics plays a role in immune system dysregulation by assaying DNA methylation in immune cells. 12. International Space Station Water Balance Operations Science.gov (United States) Tobias, Barry; Garr, John D., II; Erne, Meghan 2011-01-01 In November 2008, the Water Regenerative System racks were launched aboard Space Shuttle flight STS-126 (ULF2) and installed and activated on the International Space Station (ISS). These racks, consisting of the Water Processor Assembly (WPA) and Urine Processor Assembly (UPA), completed the installation of the Regenerative (Regen) Environmental Control and Life Support Systems (ECLSS), which includes the Oxygen Generation Assembly (OGA) that was launched 2 years prior. With the onset of active water management on the US segment of the ISS, a new operational concept was required: that of water balance. In November of 2010, the Sabatier system, which converts H2 and CO2 into water and methane, was brought on line.
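The Sabatier reaction mentioned above (CO2 + 4H2 → CH4 + 2H2O) fixes the stoichiometric water yield. A minimal sketch using standard molar masses, with an assumed planning figure of about 1 kg CO2 per crew member per day (an illustrative value, not a number from the abstract):

```python
# Sketch: stoichiometric water yield of the Sabatier reaction
# CO2 + 4 H2 -> CH4 + 2 H2O  (molar masses in g/mol)
M_CO2, M_H2O = 44.01, 18.015

# Water produced per kilogram of CO2 reduced (2 mol H2O per mol CO2)
water_per_kg_co2 = 2 * M_H2O / M_CO2            # ~0.82 kg water / kg CO2

# Illustrative daily load: ~1 kg CO2 per crew member per day is a common
# planning figure (an assumption here, not a value from the abstract)
crew, co2_per_person = 3, 1.0                    # persons, kg/day
daily_water = crew * co2_per_person * water_per_kg_co2
print(f"{water_per_kg_co2:.3f} kg H2O per kg CO2, {daily_water:.2f} kg/day for {crew} crew")
```

The yield is an upper bound; actual recovery depends on how much H2 the OGA makes available and on CO2 capture efficiency.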
The Regen ECLSS systems accept condensation from the atmosphere and urine from crew, and process those fluids via various means into potable water, which is used for crew drinking, building up skip-cycle water inventory, and electrolysis to produce oxygen. Specification (spec) rates of crew urine output, condensate output, O2 requirements, toilet flush water, and drinking needs are well documented and were used as the best-guess planning rates when Regen ECLSS came online. Spec rates are useful in long-term planning; however, daily or weekly rates depend upon a number of variables. The constantly changing rates created a new challenge for the ECLSS flight controllers, who are responsible for operating the ECLSS systems onboard the ISS from Mission Control in Houston. This paper reviews the various inputs to water planning, rate changes, and dynamic events, including but not limited to: crew personnel makeup, Regen ECLSS system operability, vehicle traffic, water storage availability, and Carbon Dioxide Removal Assembly (CDRA), Sabatier, and OGA capability. Along with the inputs that change the various rates, the paper reviews the different systems, their constraints, and finally the operational challenges and means by which flight controllers manage them. 13. International Space Station Crew Restraint Design Science.gov (United States) Whitmore, M.; Norris, L.; Holden, K. 2005-01-01 14. Return from space: from the International Space Station to CERN CERN Multimedia 2012-01-01 On 16 May 2011, the space shuttle Endeavour took off for the last time from Cape Canaveral in Florida with six astronauts on board. Their mission (code-named STS-134) was to install the Alpha Magnetic Spectrometer (AMS), the dark matter and antimatter detector designed at CERN, on the International Space Station. Since then, AMS has been sending data to CERN from space. On Wednesday 25 July do not miss a rare opportunity to meet the mission's six astronauts at CERN: Mark E.
Kelly, commander (NASA) Greg H. Johnson, pilot (NASA) and the mission's specialists: Michael Fincke (NASA) Roberto Vittori (ESA and ASI) Andrew J. Feustel (NASA) Greg Chamitoff (NASA) 4:20 pm: the event will kick off with a photo and autograph session at the Globe of Science and Innovation. 5 pm: lecture given by the astronauts for CERN personnel and summer students in the Main Auditorium. (Seats reserved for the summer students - contact: summer.student.info@cern.ch). ... 15. International Space Station: Meteoroid/Orbital Debris Survivability and Vulnerability Science.gov (United States) Graves, Russell 2000-01-01 This slide presentation reviews the survivability and vulnerability of the International Space Station (ISS) with respect to the threat posed by meteoroids and orbital debris. The topics include: (1) Space station natural and induced environments; (2) Meteoroid and orbital debris threat definition; (3) Requirement definition; (4) Assessment methods; (5) Shield development; and (6) Component vulnerability. 16. Thermal Radiator Pointing for International Space Station Science.gov (United States) Green, Scott 1999-01-01 17. Levitation Technology in International Space Station Research Science.gov (United States) Guinart-Ramirez, Y.; Cooley, V. M.; Love, J. E. 2016-01-01 The International Space Station (ISS) is a unique multidisciplinary orbiting laboratory for science and technology research, enabling discoveries that benefit life on Earth and exploration of the universe. ISS facilities for containerless sample processing in Materials Science experiments include levitation devices that provide specimen positioning control while reducing containment vessel contamination. For example, ESA's EML (ElectroMagnetic Levitator) is used for melting and solidification of conductive metals, alloys, or semiconductors in ultra-high vacuum or in high-purity gaseous atmospheres. Sample heating and positioning are accomplished through electromagnetic fields generated by a coil system.
EML applications cover investigation of solidification and microstructural formation, evaluation of thermophysical properties of highly reactive metals (whose properties can be very sensitive to contamination), and examination of undercooled liquid metals to understand metastable phases and the influence of convection on structural changes. MSL (Materials Science Laboratory) utilization includes development of novel lightweight, high-performance materials. Another facility, JAXA's ELF (Electrostatic Levitation Furnace), is used to perform high-temperature melting while avoiding chemical reactions with crucibles by levitating a sample through Coulomb force. ELF is capable of measuring density, surface tension, and viscosity of samples at high temperatures. One of the initial ELF investigations, Interfacial Energy-1, is aimed at clarification of interfacial phenomena between molten steels and oxide melts, with industrial applications in control processes for liquid mixing. In addition to these Materials Science facilities, other ISS investigations employ levitation for biological research. For example, NASA's "Magnetic 3D Culturing and Bioprinting" investigation uses magnetic levitation for three-dimensional culturing and positioning of magnetized cells to generate spheroid assemblies. 18. Supercritical water oxidation - Concept analysis for evolutionary Space Station application Science.gov (United States) Hall, John B., Jr.; Brewer, Dana A. 1986-01-01 The ability of a supercritical water oxidation (SCWO) concept to reduce the number of processes needed in an evolutionary Space Station design's Environmental Control and Life Support System (ECLSS), while reducing resupply requirements and enhancing the integration of separate ECLSS functions into a single supercritical water oxidation process, is evaluated.
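The Coulomb-force levitation used by the ELF facility described above reduces, in ground testing, to a simple force balance: the electrostatic force qE must equal the sample's weight mg. A minimal sketch with illustrative assumed sample and field values (not ELF specifications; in orbit only small positioning forces are needed):

```python
import math

# Sketch: force balance for electrostatic (Coulomb) levitation, q*E = m*g.
# All numeric values below are illustrative assumptions, not ELF parameters.
g = 9.81          # m/s^2 (1-g ground testing)
rho = 6000.0      # kg/m^3, assumed density of a small oxide sample
r = 1.0e-3        # m, assumed 1 mm sample radius
E = 1.0e6         # V/m, assumed field (~10 kV across a ~10 mm gap)

m = rho * (4.0 / 3.0) * math.pi * r**3   # sample mass from density and volume
q = m * g / E                            # charge needed to levitate at 1 g
print(f"mass = {m*1e6:.1f} mg, required charge = {q*1e9:.2f} nC")
```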
While not feasible for an initial operational capability Space Station, the SCWO concept's application to the evolutionary Space Station configuration would aid the integration of eight ECLSS functions into a single one, thereby significantly reducing program costs. 19. Electrochemical CO2 concentration for the Space Station Program Science.gov (United States) Lance, N.; Schwartz, M.; Boyda, R. B. 1985-01-01 Under the sponsorship of NASA, Electrochemical Carbon Dioxide (CO2) Concentration (EDC) technology has been developed that removes CO2 continuously or cyclically from low CO2 partial pressure (400 Pa) atmospheres with the performance and operating characteristics required for Space Station applications. The most recent advancement of this technology is the development of an advanced preprototype subsystem, the CS-3A, to remove the metabolic CO2 produced by three persons from the projected Space Station atmosphere. This paper provides an overview of EDC technology, shows how it is ideally suited for Space Station application, and presents technology enhancements that will be demonstrated by the CS-3A subsystem development program. 20. The space station and human productivity: An agenda for research Science.gov (United States) Schoonhoven, C. B. 1985-01-01 Organizational problems in permanent organizations in outer space were analyzed. The environment of space provides substantial opportunities for organizational research. Questions are examined about how to organize professional workers in a technologically complex setting with novel dangers and uncertainties present in the immediate environment. It is suggested that knowledge from organization theory/behavior is an underutilized resource in the U.S. space program. A U.S. space station will be operable by the mid-1990s. Organizational issues will take on increasing importance, because a space station requires the long-term organization of human and robotic work in the isolated and confined environment of outer space.
When an organizational analysis of the space station is undertaken, there are research implications at multiple levels: individual, small group, organizational, and environmental. The research relevant to organization theory and behavior is reviewed. 1. Physics of Colloids in Space: Microgravity Experiment Launched, Installed, and Activated on the International Space Station Science.gov (United States) Doherty, Michael P. 2002-01-01 The Physics of Colloids in Space (PCS) experiment is a Microgravity Fluids Physics investigation that is presently located in an Expedite the Process of Experiments to Space Station (EXPRESS) Rack on the International Space Station. PCS was launched to the International Space Station on April 19, 2001, activated on May 31, 2001, and will continue to operate about 90 hr per week through May 2002. 2. Veggie: Space Vegetables for the International Space Station and Beyond Science.gov (United States) Massa, Gioia D. 2016-01-01 The Veggie vegetable production system was launched to the International Space Station (ISS) in 2014. Veggie was designed by ORBITEC to be a compact, low-mass, low-power vegetable production system for astronaut crews. Veggie consists of a light cap containing red, blue, and green LEDs, an extensible transparent bellows, and a baseplate with a root mat reservoir. Seeds are planted in plant pillows, small growing bags that interface with the reservoir. The Veggie technology validation test, VEG-01, was initiated with the first test crop of 'Outredgeous' red romaine lettuce. Prior to flight, lettuce seeds were sanitized and planted in a substrate of arcillite (baked ceramic) mixed with controlled-release fertilizer. Upon initiation, astronauts open the packaged plant pillows, install them in the Veggie hardware, and prime the system with water. Operations include plant thinning, watering, and photography.
Plants were grown on the ISS for 33 days, harvested, and returned frozen to Earth for analysis. Ground controls were conducted at Kennedy Space Center in controlled environment chambers reproducing ISS conditions of temperature, relative humidity, and CO2. Returned plant samples were analyzed for microbial food safety and for chemistry, including elements, antioxidants, anthocyanins, and phenolics. In addition, the entire plant microbiome was sequenced, and returned plant pillows were analyzed via x-ray tomography. Food safety analyses allowed us to gain approvals from the flight surgeons and the payload safety office for future consumption of lettuce. A second crop of lettuce was grown in 2015, and the crew consumed half the produce, with the remainder frozen for later analysis. This growth test was followed by testing of a new crop in Veggie, zinnias. Zinnias were grown to test a longer-duration flowering crop in preparation for tests of tomatoes and other fruiting crops in the future. Zinnias were harvested in February. Samples from the second harvest of lettuce and the 3. Implementation of Intellectual Property Law on the International Space Station Science.gov (United States) Mannix, John G. 2002-01-01 Because of the importance of intellectual property rights to the private sector, NASA has developed a reference guide to assist business leaders in understanding how the Intellectual Property Articles of the 1998 Intergovernmental Agreement on the International Space Station will be implemented. This reference guide discusses the statutory, regulatory and programmatic strictures on the deployment, utilization and ownership of intellectual property within the Space Station program. This guide presents an analysis of the intellectual property law aspects of the international agreements and documents pertaining to the International Space Station, and then relates them to NASA's authorities for entering into research and development agreements with private entities.
This paper will discuss the reference guide and should aid potential agreement participants in understanding the legal environment for entering into agreements with NASA to fly research and development payloads on the International Space Station. 4. Integrated scheduling and resource management [for Space Station Information System] Science.gov (United States) Ward, M. T. 1987-01-01 This paper examines the problem of integrated scheduling during the Space Station era. Scheduling for Space Station entails coordinating the support of many distributed users who are sharing common resources and pursuing individual and sometimes conflicting objectives. This paper compares the scheduling integration problems of current missions with those anticipated for the Space Station era. It examines the facilities and the proposed operations environment for Space Station. It concludes that the pattern of interdependencies among the users and facilities, which is the source of the integration problem, is well structured, allowing the larger problem to be divided into smaller ones. It proposes an architecture to support integrated scheduling by scheduling efficiently at local facilities as a function of dependencies with other facilities of the program. A prototype is described that is being developed to demonstrate this integration concept. 5. Was Einstein wrong? Space station research may find out CERN Multimedia 2002-01-01 Experiments using ultra-precise clocks on the International Space Station will attempt to check if Einstein's Special Theory of Relativity is correct. Future experiments may also yield evidence of string theory (1 page). 6. STS-111 Onboard Photo of the International Space Station Science.gov (United States) 2002-01-01 Backdropped against the blackness of space is the International Space Station (ISS), as viewed from the approaching Space Shuttle Orbiter Endeavour, STS-111 mission, in June 2002.
Expedition Five replaced the Expedition Four crew, which had remained in space a record-setting 196 days. Three spacewalks enabled the STS-111 crew to accomplish the delivery and installation of the Mobile Remote Servicer Base System (MBS), an important part of the Station's Mobile Servicing System that allows the robotic arm to travel the length of the Station, which is necessary for future construction tasks; the replacement of a wrist roll joint on the Station's robotic arm; and the task of unloading supplies and science experiments from the Leonardo Multi-Purpose Logistics Module, which made its third trip to the orbital outpost. The STS-111 mission, the 14th Shuttle mission to visit the ISS, was launched on June 5, 2002 and landed June 19, 2002. 7. Technology for Space Station Evolution: the Data Management System Science.gov (United States) Abbott, L. 1990-01-01 Viewgraphs on the data management system (DMS) for space station evolution are presented. Topics covered include the DMS architecture and implementation approach, and an overview of the runtime object database. 8. Space station operations task force. Panel 4 report: Management integration Science.gov (United States) 1987-01-01 The Management Integration Panel of the Space Station Operations Task Force was chartered to provide a structure and ground rules for integrating the efforts of the other three panels and to address a number of cross-cutting issues that affect all areas of space station operations. Issues addressed include operations concept implementation, alternatives development and integration process, strategic policy issues and options, and program management emphasis areas. 9. Total Station Survey Monitoring Through an Observation Window: A ... African Journals Online (AJOL) At a mine, a shelter houses the total station.
This study briefly discusses total station survey monitoring and develops a systematic error correction formula to reduce the effect of glass properties, such as thickness and colour, on distance measurements through a shelter window glass in a surface mine environment. Each developed ... 10. Space station needs, attributes and architectural options study review report Science.gov (United States) 1983-01-01 A manned space station (SS) produces a significant net economic benefit over its cost, as well as providing substantial social and performance benefits. The largest space station benefits arise from the ability of the SS to warehouse parts, orbit replacement units (ORUs) and fuel, and thereby increase the Space Transportation System (STS) load factor. Substantial other benefits are made possible by the basing of a returnable orbital transfer vehicle (ROTV) and the servicing of geosynchronous Earth orbit (GEO) satellites at the SS. It is recommended that a manned space station be placed in a 28.5-degree-inclination orbit in 1990. This SS can be designed to grow, to be maintained, and to incorporate new technology as it becomes available. It should be augmented with unmanned space platforms at both 28.5-degree and polar inclinations. These platforms are designed to have very high commonality with the SS resource models. 11. AMS-02 on the International Space Station Directory of Open Access Journals (Sweden) 2014-04-01 During the first year in space, several billion events have been recorded. The flight operation, the detector performance, together with early results and perspectives for physics measurements, are reported. 12. STS-104 Onboard Photograph-International Space Station Science.gov (United States) 2001-01-01 This International Space Station (ISS) image was taken by the STS-104 crew during a fly-around inspection of the ISS after the installation of the Joint Airlock. The inspection occurred shortly after the orbiter Atlantis undocked from the ISS.
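The window-glass distance error addressed by the total station study above can be approximated to first order: at normal incidence, an electronic distance measurement beam passing through glass of thickness t and refractive index n is delayed as if the path were longer by t(n − 1). A hedged sketch with assumed values, not the study's calibrated correction formula:

```python
# Sketch: first-order apparent distance error when an EDM beam passes
# through window glass at normal incidence. The glass properties below are
# illustrative assumptions, not values from the cited study.
n_glass = 1.52      # refractive index, typical of soda-lime glass
t = 0.006           # m, assumed 6 mm pane thickness

# Optical path through the glass exceeds the geometric path by t*(n - 1)
extra_path = t * (n_glass - 1.0)   # apparent lengthening per pass, in metres
print(f"apparent distance error ~ {extra_path*1000:.2f} mm per pass")
```

Colour (absorption) and oblique incidence add further effects, which is presumably why the study fits an empirical correction rather than relying on this idealized term alone.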
The Canadarm2, or Space Station Remote Manipulator System (SSRMS), appears to be pointed toward the newly installed airlock on the station's starboard side. The STS-104 mission marked the completion of the second phase of station assembly. Since the beginning of assembly in July 2000, 77 tons of hardware have been added to the complex, including the Russian Zvezda Module, the Z1 Truss Assembly, the Pressurized Mating Adapter 3, the P6 Truss and its 240-foot-long solar arrays, the U.S. Laboratory Destiny, the Canadarm2, and finally the Quest Airlock. The launch of the Space Shuttle Orbiter Atlantis, STS-104 mission, occurred on July 21, 2001. 13. Using computer graphics to design Space Station Freedom viewing Science.gov (United States) Goldsberry, Betty S.; Lippert, Buddy O.; Mckee, Sandra D.; Lewis, James L., Jr.; Mount, Francis E. 1993-01-01 Viewing requirements were identified early in the Space Station Freedom program for both direct viewing via windows and indirect viewing via cameras and closed-circuit television (CCTV). These requirements reside in NASA Program Definition and Requirements Document (PDRD), Section 3: Space Station Systems Requirements. Currently, analyses are addressing the feasibility of direct and indirect viewing. The goal of these analyses is to determine the optimum locations for the windows, cameras, and CCTVs in order to meet established requirements, to adequately support space station assembly, and to operate on-board equipment. PLAID, a three-dimensional computer graphics program developed at NASA JSC, was selected for use as the major tool in these analyses. PLAID provides the capability to simulate the assembly of the station as well as to examine operations as the station evolves. This program has been used successfully as a tool to analyze general viewing conditions for many Space Shuttle elements and can be used for virtually all Space Station components.
Additionally, PLAID provides the ability to integrate an anthropometric scale-modeled human (representing a crew member) with interior and exterior architecture. 14. Linking the space shuttle and space stations: early docking technologies from concept to implementation CERN Document Server Shayler, David J 2017-01-01 How could the newly authorized space shuttle help in the U.S. quest to build a large research station in Earth orbit? As a means of transporting goods, the shuttle could help supply the parts to the station. But how would the two entities be physically linked? Docking technologies had to constantly evolve as the designs of the early space stations changed. It was hoped the shuttle would make missions to the Russian Salyut and American Skylab stations, but these were postponed until the Mir station became available, while plans for getting a new U.S. space station underway were stalled. In Linking the Space Shuttle and Space Stations, the author delves into the rich history of the Space Shuttle and its connection to these early space stations, culminating in the nine missions to dock the shuttle to Mir. By 1998, after nearly three decades of planning and operations, shuttle missions to Mir had resulted in: • A proven system to link up the space shuttle to a space station • Equipment and hands-on experienc... 15. Earth Observations from the International Space Station: Benefits for Humanity Science.gov (United States) Stefanov, William L. 2015-01-01 The International Space Station (ISS) is a unique terrestrial remote sensing platform for observation of the Earth's land surface, oceans, and atmosphere. Unlike automated remote-sensing platforms, it has a human crew; is equipped with both internal and externally mounted active and passive remote sensing instruments; and has an inclined, low-Earth orbit that provides variable views and lighting (day and night) over 95 percent of the inhabited surface of the Earth.
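The coverage figure quoted above follows largely from orbital geometry: a ground track inclined at i degrees reaches latitudes ±i, and the fraction of Earth's surface area lying between those latitudes is sin(i). A minimal sketch, assuming the ISS's 51.6° inclination (a well-known value, but not stated in the abstract):

```python
import math

# Sketch: surface-area fraction swept by the ground track of an inclined orbit.
# The track reaches latitudes +/- i, and the area of the spherical band between
# latitudes -i and +i is sin(i) of the total surface (a standard sphere result).
i_deg = 51.6   # deg, assumed ISS inclination (not given in the abstract)
band_fraction = math.sin(math.radians(i_deg))
print(f"{band_fraction:.1%} of Earth's surface lies between +/-{i_deg} deg latitude")
```

The computed ~78% of total surface area is consistent with the abstract's 95 percent of the *inhabited* surface, since population concentrates at lower latitudes and off-nadir viewing extends coverage beyond the ground track itself.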
As such, it provides a useful complement to autonomous, sun-synchronous sensor systems in higher-altitude polar orbits. Beginning in May 2012, NASA ISS sensor systems have been available to respond to requests for data through the International Charter, Space and Major Disasters, also known as the "International Disaster Charter" or IDC. Data from digital handheld cameras and from multispectral and hyperspectral imaging systems have been acquired in response to IDC activations and delivered to requesting agencies through the United States Geological Survey. The characteristics of the ISS for Earth observation will be presented, including past, current, and planned NASA, International Partner, and commercial remote sensing systems. The role and capabilities of the ISS for humanitarian benefit, specifically collection of remotely sensed disaster response data, will be discussed. 16. STS-102 Onboard Photograph-International Space Station Science.gov (United States) 2001-01-01 One of the astronauts aboard the Space Shuttle Discovery took this photograph of the International Space Station (ISS) in orbit from the aft flight deck of the Discovery. The photo was taken after separation of the orbiter Discovery from the ISS, following several days of joint activities and an important crew exchange. 17. STS-92 Onboard Photograph-International Space Station Science.gov (United States) 2000-01-01 As the Space Shuttle Discovery began its separation from the International Space Station (ISS), a crew member captured this view of the ISS, revealing new additions to the complex. Most of the Z1 truss structure is visible, along with the recently installed Pressurized Mating Adapter (PMA-3). 18. Space Station life science research facility - The vivarium/laboratory Science.gov (United States) Hilchey, J. D.; Arno, R. D. 1985-01-01 Research opportunities possible with the Space Station are discussed.
The objective of the research program will be to study gravity relationships for animal and plant species. The equipment necessary for space experiments, including vivarium facilities, is described. The cost of developing research facilities such as the vivarium/laboratory and a bioresearch centrifuge is examined. 19. Acoustic emissions applications on the NASA Space Station Energy Technology Data Exchange (ETDEWEB) Friesel, M.A.; Dawson, J.F.; Kurtz, R.J.; Barga, R.S.; Hutton, P.H.; Lemon, D.K. 1991-08-01 Acoustic emission is being investigated as a way to continuously monitor Space Station Freedom for damage caused by space debris impact and seal failure. Experiments run to date focused on detecting and locating simulated and real impacts and leakage. These were performed both in the laboratory on a section of material similar to a space station shell panel and on the full-scale common module prototype at Boeing's Huntsville facility. A neural network approach supplemented standard acoustic emission detection and analysis techniques. 4 refs., 5 figs., 1 tab. 20. Developing the human-computer interface for Space Station Freedom Science.gov (United States) Holden, Kritina L. 1991-01-01 For the past two years, the Human-Computer Interaction Laboratory (HCIL) at the Johnson Space Center has been involved in prototyping and prototype reviews in support of the definition phase of the Space Station Freedom program. On the Space Station, crew members will be interacting with multi-monitor workstations where interaction with several displays at one time will be common. The HCIL has conducted several experiments to begin to address design issues for this complex system. Experiments have dealt with the design of ON/OFF indicators, the movement of the cursor across multiple monitors, and the importance of various windowing capabilities for users performing multiple tasks simultaneously. 1. Technology for Space Station Evolution.
Executive summary and overview Science.gov (United States) 1990-01-01 NASA's Office of Aeronautics and Space Technology (OAST) conducted a workshop on technology for space station evolution on 16-19 Jan. 1990. The purpose of this workshop was to collect and clarify Space Station Freedom technology requirements for evolution and to describe technologies that can potentially fill those requirements. These proceedings are organized into an Executive Summary and Overview and five volumes containing the technology discipline presentations. The Executive Summary and Overview contains an executive summary for the workshop, the technology discipline summary packages, and the keynote address. The executive summary provides a synopsis of the events and results of the workshop and the technology discipline summary packages. 2. Scattering Effects of Solar Panels on Space Station Antenna Performance Science.gov (United States) Panneton, Robert J.; Ngo, John C.; Hwu, Shian U.; Johnson, Larry A.; Elmore, James D.; Lu, Ba P.; Kelley, James S. 1994-01-01 Characterizing the scattering properties of the solar array panels is important in predicting Space Station antenna performance. A series of far-field, near-field, and radar cross section (RCS) scattering measurements were performed at S-band and Ku-band microwave frequencies on Space Station solar array panels. Based on investigation of the measured scattering patterns, the solar array panels exhibit scattering properties similar to those of a same-size aluminum or copper panel mockup. As a first-order approximation, and for worst-case interference simulation, the solar array panels may be modeled as perfectly reflecting plates. Numerical results obtained using the Geometrical Theory of Diffraction (GTD) modeling technique are presented for Space Station antenna pattern degradation due to solar panel interference.
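The perfectly-reflecting-plate approximation described above has a classical closed form: at normal incidence, the physical-optics RCS of a flat plate of area A is σ = 4πA²/λ². A sketch with an assumed panel area and S-band frequency (illustrative values, not actual Space Station parameters):

```python
import math

# Sketch: normal-incidence RCS of a perfectly reflecting flat plate,
# sigma = 4*pi*A^2 / lambda^2 -- the textbook physical-optics result that
# motivates modeling solar panels as perfect reflectors at microwave bands.
c = 3.0e8        # m/s, speed of light
f = 2.25e9       # Hz, assumed S-band example frequency
A = 1.0          # m^2, assumed panel area (illustrative)

lam = c / f
sigma = 4.0 * math.pi * A**2 / lam**2
print(f"lambda = {lam:.3f} m, plate RCS ~ {sigma:.0f} m^2 ({10*math.log10(sigma):.1f} dBsm)")
```

The quadratic dependence on area and inverse-square dependence on wavelength explain why even modest panels dominate scattering at Ku-band relative to S-band.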
The computational and experimental techniques presented in this paper are applicable to antennas mounted on other platforms such as ships, aircraft, satellites, and space or land vehicles. 3. Achievements and challenges of Space Station Freedom's safety review process Science.gov (United States) Robinson, David W. 1993-01-01 The most complex space vehicle in history, Space Station Freedom, is well underway to completion, and System Safety is a vital part of the program. The purpose is to summarize and illustrate the progress that over one hundred System Safety engineers have made in identifying, documenting, and controlling the hazards inherent in the space station. To date, Space Station Freedom has been reviewed by NASA's safety panels through the first six assembly flights, when Freedom achieves a configuration known as Man-Tended Capability. During the eight weeks of safety reviews spread out over a year and a half, over 200 preliminary hazard reports were presented. Along the way NASA and its contractors faced many challenges, made much progress, and even learned a few lessons. 4. Does the underground sidewall station survey method meet MHSA ... African Journals Online (AJOL) The question is asked whether or not this method of surveying will meet the MHSA standards of accuracy that were developed for typical hangingwall traverse-type networks. Results obtained from a survey closure using a network of clusters of four sidewall stations demonstrate that under the described circumstances it will ... 5. Total station survey monitoring through an observation window: a ... African Journals Online (AJOL) Total stations are used extensively for taking geodetic and engineering survey measurements. These measurements are made possible by accurate observation of targeted points. Examples include deformation surveys and slope stability monitoring in mines. Continuous monitoring necessitates sheltering or housing the ... 6.
Military Use of the International Space Station Science.gov (United States) 1988-11-01 7. STS-98 Onboard Photograph-International Space Station Science.gov (United States) 2001-01-01 The International Space Station (ISS), with its newly attached U.S. Laboratory, Destiny, was photographed by a crew member aboard the Space Shuttle Orbiter Atlantis during a fly-around inspection after Atlantis separated from the Space Station. The Laboratory is shown in the foreground of this photograph. The American-made Destiny module is the cornerstone for space-based research aboard the orbiting platform and the centerpiece of the International Space Station (ISS), where unprecedented science experiments will be performed in the near-zero gravity of space. Destiny will also serve as the command and control center for the ISS. The aluminum module is 8.5 meters (28 feet) long and 4.3 meters (14 feet) in diameter. The laboratory consists of three cylindrical sections and two endcones with hatches that will be mated to other station components. A 50.9-centimeter (20-inch) diameter window is located on one side of the center module segment. This pressurized module is designed to accommodate pressurized payloads. It has a capacity of 24 rack locations. Payload racks will occupy 15 locations especially designed to support experiments. The Destiny module was built by the Boeing Company under the direction of the Marshall Space Flight Center. 8.
A simulation facility for testing Space Station assembly procedures Science.gov (United States) Hajare, Ankur R.; Wick, Daniel T.; Shehad, Nagy M. 1994-11-01 NASA plans to construct the Space Station Freedom (SSF) in one of the most hazardous environments known to mankind - space. It is of the utmost importance that the procedures to assemble and operate the SSF in orbit are both safe and effective. This paper describes a facility designed to test the integration of the telerobotic systems and to test assembly procedures using a real-world robotic arm grappling space hardware in a simulated microgravity environment. 9. The AMS detector heads for the International Space Station CERN Multimedia CERN Video Productions 2011-01-01 The AMS particle detector will take off on 29 April 2011 at 21.47 CEST onboard the very last mission of the Space Shuttle Endeavour. AMS, the Alpha Magnetic Spectrometer, will then be installed on the International Space Station, from where it will explore the Universe for a period of over 10 years. AMS will address some of the most exciting mysteries of modern physics, looking for antimatter and dark matter in space, phenomena that have remained elusive up to now. 10. The international space station as a free flyer servicing node Science.gov (United States) 1999-01-01 The International Space Station will provide a multitude of opportunities for an expanding customer base to make use of this international resource. One such opportunity is servicing of various visiting vehicles that are in a similar orbit to the station. Servicing may include change-out of payloads, replenishment of consumables, repair, and refurbishment operations. Previous studies have been conducted in which "paper" free flyers have been assessed against the station's ability to accommodate them. Over the last several months, though, an already-flown free flyer, EURECA, was assessed as a real-life visiting free flyer design reference mission.
Issues such as capture/berthing, servicing, logistics support, and stowage were assessed for station design and operational approaches. This paper will highlight critical visiting vehicle design considerations, identify station issues, and provide recommendations for accommodation of a wide range of visiting vehicle requirements of the future. 11. Operability of Space Station Freedom's meteoroid/debris protection system Science.gov (United States) Kahl, Maggie S.; Stokes, Jack W. 1992-01-01 The design of Space Station Freedom's external structure must not only protect the spacecraft from the hazardous environment, but also must be compatible with the extravehicular activity system for assembly and maintenance. The external procedures for module support are utility connections, external orbital replaceable unit changeout, and maintenance of the meteoroid/debris shields and multilayer insulation. All of these interfaces require proper man-machine engineering to be compatible with the extravehicular activity and manipulator systems. This paper discusses design solutions, including those provided for human interface, to the Space Station Freedom meteoroid/debris protection system. The system advantages and current access capabilities are illustrated through analysis of its configuration over the Space Station Freedom resource nodes and common modules, with emphasis on the cylindrical sections and endcones. 12. United States Space Station technical and programmatic interfaces Science.gov (United States) Carlisle, Richard F.; Rice, William E. 1987-01-01 This paper describes the design of the U.S. Space Station and explains the control factors used for internal and external interfaces among the various government and contractor participants. It discusses the documentation of the U.S.
Space Station Program including the Program Approval Document (PAD), the Program Plans (PPs), the Program Requirements Document (PRD), the Program Definition and Requirements Document (PDRD), the Level III project plans, and the Level III project design requirements documents. It discusses the relationship of Space Station documentation to the international Memoranda of Understanding (MOUs) and the Joint PP, PRD, and PDRD, the interrelationship of the Architectural Control Documents (ACDs), the Baseline Control Document (BCD), and the Interface Requirement Documents (IRDs) and Interface Control Documents (ICDs). Also included are the controlling functions of the various NASA and contractor participants and the international partners. 13. Space Station Freedom pressurized element interior design process Science.gov (United States) Hopson, George D.; Aaron, John; Grant, Richard L. 1990-01-01 The process used to develop the on-orbit working and living environment of the Space Station Freedom has some unique constraints and conditions to satisfy. The goal is to provide maximum efficiency and utilization of the available space under on-orbit, zero-G conditions, establishing a comfortable, productive, and safe working environment for the crew. The Space Station Freedom on-orbit living and working space can be divided into support for three major functions: (1) operations, maintenance, and management of the station; (2) conduct of experiments, both directly in the laboratories and remotely for experiments outside the pressurized environment; and (3) crew-related functions for food preparation, housekeeping, storage, personal hygiene, health maintenance, zero-G environment conditioning, individual privacy, and rest. The process used to implement these functions, the major requirements driving the design, unique considerations and constraints that influence the design, and summaries of the analysis performed to establish the current configurations are described.
Sketches and pictures showing the layout and internal arrangement of the Nodes, U.S. Laboratory and Habitation modules identify the current design relationships of the common and unique station housekeeping subsystems. The crew facilities, workstations, food preparation and eating areas (galley and wardroom), exercise/health maintenance configurations, and waste management and personal hygiene area configurations are shown. U.S. Laboratory experiment facilities and maintenance work areas planned to support the wide variety and mixtures of life science and materials processing payloads are described. 14. Space Station Radiator Test Hosted by NASA Lewis at Plum Brook Station Science.gov (United States) Speth, Randall C. 1998-01-01 In April of 1997, the NASA Lewis Research Center hosted the testing of the photovoltaic thermal radiator that is to be launched in 1999 as part of flight 4A of the International Space Station. The tests were conducted by Lockheed Martin Vought Systems of Dallas, who built the radiator. This radiator, and three more like it, will be used to cool the electronic system and power storage batteries for the space station's solar power system. Three of the four units will also be used early on to cool the service module. 15. An assessment of clinical chemical sensing technology for potential use in space station health maintenance facility Science.gov (United States) 1987-01-01 A Health Maintenance Facility, which will provide capabilities equivalent to those found on Earth, is currently under development for space station application. This final report addresses the study of alternate means of diagnosis and evaluation of impaired tissue perfusion in a microgravity environment. Chemical data variables related to the dysfunction and the sensors required to measure these variables are reviewed. A technology survey outlines the ability of existing systems to meet these requirements.
The candidate sensing system was subjected to rigorous testing to determine its suitability. Recommendations for follow-on activities are included that would make the commercial system more appropriate for space station applications. 16. Life sciences biomedical research planning for Space Station Science.gov (United States) Primeaux, Gary R.; Michaud, Roger; Miller, Ladonna; Searcy, Jim; Dickey, Bernistine 1987-01-01 The Biomedical Research Project (BmRP), a major component of the NASA Life Sciences Space Station Program, incorporates a laboratory for the study of the effects of microgravity on the human body, and the development of techniques capable of modifying or counteracting these effects. Attention is presently given to a representative scenario of BmRP investigations and associated engineering analyses, together with an account of the evolutionary process by which the scenarios and the Space Station design requirements they entail are identified. Attention is given to a tether-implemented 'variable gravity centrifuge'. 17. Conceptual planning for Space Station life sciences human research project Science.gov (United States) Primeaux, Gary R.; Miller, Ladonna J.; Michaud, Roger B. 1986-01-01 The Life Sciences Research Facility dedicated laboratory is currently undergoing system definition within the NASA Space Station program. Attention is presently given to the Human Research Project portion of the Facility, in view of representative experimentation requirement scenarios and with the intention of accommodating the Facility within the Initial Operational Capability configuration of the Space Station. Such basic engineering questions as orbital and ground logistics operations and hardware maintenance/servicing requirements are addressed.
Biospherics, calcium homeostasis, endocrinology, exercise physiology, hematology, immunology, muscle physiology, neurosciences, radiation effects, and reproduction and development are among the fields of inquiry encompassed by the Facility. 18. Development of a Space Station Operations Management System Science.gov (United States) Brandli, A. E.; Mccandless, W. T. 1988-01-01 To enhance the productivity of operations aboard the Space Station, a means must be provided to augment, and frequently to supplant, human effort in support of mission operations and management, both on the ground and onboard. The Operations Management System (OMS), under development at the Johnson Space Center, is one such means. OMS comprises the tools and procedures to facilitate automation of station monitoring, control, and mission planning tasks. OMS mechanizes, and hence rationalizes, execution of tasks traditionally performed by mission planners, the mission control center team, onboard System Management software, and the flight crew. 19. Teleprogramming a cooperative space robotic workcell for space station Science.gov (United States) Haule, Damian D.; Noorhosseini, S. M.; Malowany, Alfred S. 1992-11-01 The growing insight into the complexity and cost of in-orbit operations of future space missions strengthens the belief that a significant amount of automation will be needed to operate the orbital laboratories in a safe, efficient, and economic way. Thus, Automation & Robotics (A&R) technology is vital for unmanned exploration missions to comets and planets. While part of the space worksite may be structured, the space environment is generally unstructured. By "structured," we mean environments that are designed and engineered to somehow "cooperate" with the machine. In addition, the structured part of the space worksite may be damaged or in an unknown condition.
This lack of structure, as well as the non-repetitive nature of the tasks, requires constant adaptation to the space environment by the robot. This is the motivation for increased space robot autonomy. However, complete autonomy is still beyond the scope of today's state-of-the-art in the case of a system executing a complete mission in a hazardous environment such as space. A systematic approach for the development of A&R technologies will reduce the lead-times and costs of facilities for recurrent basic tasks. A space robotic workcell (SRW) is a collection of robots, sensors, and other industrial equipment grouped in a cooperative environment to perform various complex tasks in space. Due to their distributed nature, the control and programming of SRWs is often a difficult task. The issues involved in designing a real-time teleprogrammable SRW system that performs intervention tasks at remote unstructured sites are summarized. The concept of "remotely operated autonomous robots" (i.e., robots teleprogrammed and telesupervised at the task level while at a space worksite) is also developed via telepresence for human-machine interface and voice/speech programming. This paper makes an assessment of the role that teleprogramming may have in furthering the automation capabilities of space teleoperated ... 20. Managing NASA's International Space Station Logistics and Maintenance Program Science.gov (United States) Butina, Anthony 2001-01-01 The International Space Station's Logistics and Maintenance program has had to develop new technologies and a management approach for both space and ground operations. The ISS will be a permanently manned orbiting vehicle that has no landing gear, no international borders, and no organizational lines - it is one Station that must be supported by one crew, 24 hours a day, 7 days a week, 365 days a year. It flies partially assembled for a number of years before it is finally completed in 2006.
It has over 6,000 orbital replaceable units (ORU), and spare parts which number into the hundreds of thousands, from 127 major US vendors and 70 major international vendors. From conception to operation, the ISS requires a unique approach in all aspects of development and operations. Today the dream is coming true; hardware is flying and hardware is failing. The system has been put into place to support the Station for both space and ground operations. It started with the basic support concept developed for Department of Defense systems, and then it was tailored for the unique requirements of a manned space vehicle. Space logistics is a new concept that has wide-reaching consequences for both space travel and life on Earth. This paper discusses what type of organization has been put into place to support both space and ground operations and discusses each element of that organization. In addition, some of the unique operations approaches this organization has had to develop are discussed. 1. Function, form, and technology - The evolution of Space Station in NASA Science.gov (United States) Fries, S. D. 1985-01-01 The history of major Space Station designs over the last twenty-five years is reviewed. The evolution of design concepts is analyzed with respect to the changing functions of Space Stations and available or anticipated technology capabilities. Emphasis is given to the current NASA Space Station reference configuration, the 'power tower'. Detailed schematic drawings of the different Space Station designs are provided. 2. Space Station Freedom: a unique laboratory for gravitational biology research Science.gov (United States) Phillips, R. W.; Cowing, K. L. 1993-01-01 The advent of Space Station Freedom (SSF) will provide a permanent laboratory in space with unparalleled opportunities to perform biological research. As with any spacecraft there will also be limitations.
It is our intent to describe this space laboratory and present a picture of how scientists will conduct research in this unique environment we call space. SSF is an international venture which will continue to serve as a model for other peaceful international efforts. It is hoped that as the human race moves out from this planet, back to the moon and then on to Mars, SSF can serve as a successful example of how things can and should be done. 3. NCERA-101 STATION REPORT - KENNEDY SPACE CENTER: Large Plant Growth Hardware for the International Space Station Science.gov (United States) Massa, Gioia D. 2013-01-01 This is the station report for the national controlled environments meeting. Topics to be discussed will include the Veggie and Advanced Plant Habitat ISS hardware. The goal is to introduce this hardware to a potential user community. 4. Requirements for modeling airborne microbial contamination in space stations Science.gov (United States) Van Houdt, Rob; Kokkonen, Eero; Lehtimäki, Matti; Pasanen, Pertti; Leys, Natalie; Kulmala, Ilpo 2018-03-01 Exposure to bioaerosols is one of the facets that affect indoor air quality, especially for people living in densely populated or confined habitats, and is associated with a wide range of health effects. Good indoor air quality is thus vital and a prerequisite for fully confined environments such as space habitats. Bioaerosols and microbial contamination in these confined space stations can have significant health impacts, considering the unique prevailing conditions and constraints of such habitats. Therefore, biocontamination in space stations is strictly monitored and controlled to ensure crew and mission safety. However, efficient bioaerosol control measures rely on solid understanding and knowledge of how these bioaerosols are created and dispersed, and which factors affect the survivability of the associated microorganisms.
Here we review the current knowledge gained from relevant studies in this wide and multidisciplinary area of bioaerosol dispersion modeling and biological indoor air quality control, specifically taking into account the specific space conditions. 5. Nitrogen Oxygen Recharge System for the International Space Station Science.gov (United States) Williams, David E.; Dick, Brandon; Cook, Tony; Leonard, Dan 2009-01-01 The International Space Station (ISS) requires stores of Oxygen (O2) and Nitrogen (N2) to provide for atmosphere replenishment, direct crew member usage, and payload operations. Currently, supplies of N2/O2 are maintained by transfer from the Space Shuttle. Following the Space Shuttle's retirement in 2010, an alternate means of resupplying N2/O2 to the ISS is needed. The National Aeronautics and Space Administration (NASA) has determined that the optimal method of supplying the ISS with O2/N2 is using tanks of high pressure N2/O2 carried to the station by a cargo vehicle capable of docking with the ISS. This paper will outline the architecture of the system selected by NASA and will discuss some of the design challenges associated with this use of high pressure oxygen and nitrogen in the human spaceflight environment. 6. Biotechnological experiments in space flights on board of space stations Science.gov (United States) Nechitailo, Galina S. 2012-07-01 Space flight conditions are stressful for any plant and cause structural-functional transitions due to mobilization of adaptivity. In space flight experiments with pea tissue, wheat, and arabidopsis, we found anatomical-morphological transformations and biochemical changes in the plants. In subsequent experiments, tissue of stevia (Stevia rebaudiana), potato (Solanum tuberosum), callus culture and bulbs of saffron (Crocus sativus), and callus culture of ginseng (Panax ginseng) were investigated. Experiments with stevia were carried out in special chambers. The duration of the experiment was 8-14 days.
An onboard lamp was used for illumination of the plants. After the experiment the plants grew in the same chamber, and after 50 days they were moved into artificial ion-exchange soil. Biochemical analysis of the plants was performed. The total concentration of glycosides and the ratio of stevioside to rebaudioside were found to differ between space and ground plants. In following generations of stevia after flight, the total concentration of stevioside and rebaudioside remains higher than in ground plants. Experiments with callus culture of saffron were carried out in tubes. The duration of the space flight experiment was 8-167 days. An onboard lamp was used for illumination of the plants. We found picrocrocin pigment in the space plants but not in ground plants. Tissue culture of ginseng was grown in a special container in a thermostat at a stable temperature of 22 ± 0.5 °C. The duration of the space experiment was from 8 to 167 days. Biological activity of the space flight culture was 5 times higher than that of the ground culture. This difference was observed after recultivation of space flight samples on Earth during the year after flight. Callus tissue of potato was grown in tubes in a thermostat at a stable temperature of 22 ± 0.5 °C. The duration of the space experiment was from 8 to 14 days. The concentration of regenerants in flight samples was 5 times higher than in ground samples. The space flight experiments show that microgravity and other ... 7. International Space Station Crew Return Vehicle: X-38. Educational Brief. Science.gov (United States) National Aeronautics and Space Administration, Washington, DC. The International Space Station (ISS) will provide the world with an orbiting laboratory that will have long-duration micro-gravity experimentation capability. The crew size for this facility will depend upon the crew return capability. The first crews will consist of three astronauts from Russia and the United States. The crew is limited to three… 8.
Use of international space station for fundamental physics research Science.gov (United States) Israelsson, U.; Lee, M. C. 2002-01-01 NASA's research plans aboard the International Space Station (ISS) are discussed. Experiments in low temperature physics and atomic physics are planned to commence in late 2005. Experiments in gravitational physics are planned to begin in 2007. A low temperature microgravity physics facility is under development for the low temperature and gravitation experiments. 9. NCERA-101 Station Report from Kennedy Space Center, FL, USA Science.gov (United States) Massa, Gioia D.; Wheeler, Raymond M. 2014-01-01 This is our annual report to the North Central Extension Research Activity, which is affiliated with the USDA and Land Grant University Agricultural Experiment Stations. I have been a member of this committee for 25 years. The presentation will be given by Dr. Gioia Massa, Kennedy Space Center 10. A continuum model for dynamic analysis of the Space Station Science.gov (United States) Thomas, Segun 1989-01-01 A dynamic analysis model of the International Space Station using MSC/NASTRAN had 1312 rod elements, 62 beam elements, 489 nodes, and 1473 dynamic degrees of freedom. A real-time, man-in-the-loop simulation of such a model is impractical. This paper discusses the mathematical model for real-time dynamic simulation of the Space Station. Several key questions in structures and structural dynamics are addressed. First, to achieve a significant reduction in the number of dynamic degrees of freedom, a continuum equivalent representation of the Space Station truss structure is developed, which accounts for the asymmetry of the basic configuration and results in coupling of extensional and transverse deformation. Next, dynamic equations for the continuum equivalent of the Space Station truss structure are formulated using a matrix version of Kane's dynamical equations.
Flexibility is accounted for by using a theory that accommodates extension, bending in two principal planes, and shear displacement. Finally, constraint equations suitable for dynamic analysis of flexible bodies with closed-loop configurations are developed, and solution of the resulting system of equations is based on the zero eigenvalue theorem. 11. Space vehicle with customizable payload and docking station Energy Technology Data Exchange (ETDEWEB) Judd, Stephen; Dallmann, Nicholas; McCabe, Kevin; Seitz, Daniel 2018-01-30 A "black box" space vehicle solution may allow a payload developer to define the mission space and provide mission hardware within a predetermined volume and with predetermined connectivity. Components such as the power module, radios and boards, attitude determination and control system (ADCS), command and data handling (C&DH), etc. may all be provided as part of a "stock" (i.e., core) space vehicle. The payload provided by the payload developer may be plugged into the space vehicle payload section, tested, and launched without custom development of core space vehicle components by the payload developer. A docking station may facilitate convenient development and testing of the space vehicle while reducing handling thereof. 12. [Monitoring of microbial degraders in manned space stations]. Science.gov (United States) Alekhova, T A; Aleksandrova, A A; Novozhilova, T Iu; Lysak, L V; Zagustina, N A; Bezborodov, A M 2005-01-01 Samples of microorganisms were taken from structural surfaces of the Mir Space Station (Mir SS) and examined after 13 years of operation. The following microorganisms were isolated and identified: 12 fungal species belonging to the genera Penicillium, Aspergillus, Cladosporium, and Aureobasidium; 3 yeast species belonging to the genera Debaryomyces, Candida, and Rhodotorula; and 4 bacterial species belonging to the genera Bacillus, Myxococcus, and Rhodococcus.
The predominant species in all samples was Penicillium chrysogenum. It was shown that the fungi isolated could damage polymers and induce corrosion of aluminum-magnesium alloys. We commenced a study of microbial degraders on constructions of the Russian section of the International Space Station (RS ISS). Twenty-six species of fungi, bacteria, yeasts, and actinomycetes, known as active biodegraders, were identified in three sample sets taken at intervals. We established a collection of microorganisms that survived space flights. This collection can be used to test spacecraft production materials in order to determine their resistance to biodegradation. 13. Space Station Freedom - Approaching the critical design phase Science.gov (United States) Kohrs, Richard H.; Huckins, Earle, III 1992-01-01 The status and future developments of the Space Station Freedom are discussed. To date, detailed design drawings are being produced to manufacture SSF hardware. A critical design review (CDR) for the man-tended capability configuration is planned to be performed in 1993 under the SSF program. The main objective of the CDR is to enable the program to make a full commitment to proceed to manufacture parts and assemblies. NASA recently signed a contract with the Russian space company, NPO Energia, to evaluate potential applications of various Russian space hardware for ongoing NASA programs. 14. Space station needs, attributes and architectural options study. Final executive review Science.gov (United States) 1983-01-01 Identification and validation of missions, the benefits of manned presence in space, attributes and architectures, space station requirements, orbit selection, space station architectural options, technology selection, and program planning are addressed. 15. Impact verification of space suit design for space station Science.gov (United States) Fish, Richard H.
1987-01-01 The ballistic limits of single sheet and double sheet structures made of 6061-T6 aluminum of 1.8 mm and larger nominal thickness were investigated for projectiles of 1.5 mm diameter fired at the Vertical Gun Range Test Facility at NASA Ames Research Center. The hole diameters and sheet deformation behavior were studied for various ratios of sheet spacing to projectile diameter. The results indicate that for projectiles of less than 1.5 mm diameter the ballistic limit exceeds the nominal 10 km/sec orbital debris encounter velocity, if a single-sheet suit of 1.8 mm thickness is behind a single bumper sheet of 1 mm thickness spaced 12.5 mm apart. 16. Overview of Materials International Space Station Experiment 7B Science.gov (United States) Jaworske, Donald A.; Siamidis, John 2009-01-01 Materials International Space Station Experiment 7B (MISSE 7B) is the most recent in a series of experiments flown on the exterior of the International Space Station for the purpose of determining the durability of materials and components in the space environment. A collaborative effort among the Department of Defense, the National Aeronautics and Space Administration, industry, and academia, MISSE 7B will be flying a number of NASA experiments designed to gain knowledge in the area of space environmental effects to mitigate risk for exploration missions. Consisting of trays called Passive Experiment Containers, the suitcase-sized payload opens on hinges and allows active and passive experiments contained within to be exposed to the ram and wake or zenith and nadir directions in low Earth orbit, in essence providing a test bed for atomic oxygen exposure, ultraviolet radiation exposure, charged particle radiation exposure, and thermal cycling. New for MISSE 7B is the ability to monitor experiments actively, with data sent back to Earth via International Space Station communications. NASA's active and passive experiments cover a range of interest for the Agency.
Materials relevant to the Constellation Program include: solar array materials, seal materials, and thermal protection system materials. Materials relevant to the Exploration Technology Development Program include: fabrics for spacesuits, materials for lunar dust mitigation, and new thermal control coatings. Sensors and components on MISSE 7B include: atomic oxygen fluence monitors, ultraviolet radiation sensors, and electro-optical components. In addition, fundamental space environmental durability science experiments are being flown to gather atomic oxygen erosion data and thin film polymer mechanical and optical property data relevant to lunar lander insulation and the James Webb Space Telescope. This paper will present an overview of the NASA experiments to be flown on MISSE 7B, along with a summary of the ... 17. Medical applications of space light-emitting diode technology - space station and beyond Science.gov (United States) Whelan, Harry T.; Houle, John M.; Donohoe, Deborah L.; Bajic, Dawn M.; Schmidt, Meic H.; Reichert, Kenneth W.; Weyenberg, George T.; Larson, David L.; Meyer, Glenn A.; Caviness, James A. 1999-01-01 Space light-emitting diode (LED) technology has provided medicine with a new tool capable of delivering light deep into tissues of the body, at wavelengths which are biologically optimal for cancer treatment and wound healing. This LED technology has already flown on Space Shuttle missions, and shows promise for wound healing applications of benefit to Space Station astronauts. 18. Medical Applications of Space Light-Emitting Diode Technology--Space Station and Beyond Energy Technology Data Exchange (ETDEWEB) Whelan, H.T.; Houle, J.M.; Donohoe, D.L.; Bajic, D.M.; Schmidt, M.H.; Reichert, K.W.; Weyenberg, G.T.; Larson, D.L.; Meyer, G.A.; Caviness, J.A.
1999-06-01 Space light-emitting diode (LED) technology has provided medicine with a new tool capable of delivering light deep into tissues of the body, at wavelengths which are biologically optimal for cancer treatment and wound healing. This LED technology has already flown on Space Shuttle missions, and shows promise for wound healing applications of benefit to Space Station astronauts. 19. Inspiring the Next Generation: The International Space Station Education Accomplishments Science.gov (United States) Alleyne, Camille W.; Hasbrook, Pete; Knowles, Carolyn; Chicoine, Ruth Ann; Miyagawa, Yayoi; Koyama, Masato; Savage, Nigel; Zell, Martin; Biryukova, Nataliya; Pinchuk, Vladimir; 2014-01-01 The International Space Station (ISS) has a unique ability to capture the imagination of both students and teachers worldwide. Since 2000, the presence of humans onboard the ISS has provided a foundation for numerous educational activities aimed at capturing that interest and motivating study in science, technology, engineering and mathematics (STEM). Over 43 million students around the world have participated in ISS-related educational activities. Projects such as YouTube Space Lab, Sally Ride Earth Knowledge Acquired by Middle School Students (EarthKAM), SPHERES (Synchronized Position Hold Engage and Reorient Experimental Satellites) Zero-Robotics, Tomatosphere, and MAI-75 events, among others, have allowed for global student, teacher and public access to space through student classroom investigations and real-time audio and video contacts with crewmembers. Educational activities are not limited to STEM but encompass all aspects of the human condition. This is well illustrated in the Uchu Renshi project, a chain poem initiated by an astronaut while in space and continued and completed by people on Earth. With ISS operations now extended to 2024, projects like these and their accompanying educational materials are available to more students around the world.
From very early on in the program's history, students have been provided with a unique opportunity to get involved and participate in science and engineering projects. Many of these projects support inquiry-based learning that allows students to ask questions, develop hypothesis-derived experiments, obtain supporting evidence and identify solutions or explanations. This approach to learning is widely documented as one of the most effective ways to inspire students to pursue careers in scientific and technology fields. Ever since the first space station element was launched, a wide range of student experiments and educational activities have been performed, both individually and collaboratively, by all the 20. Science.gov (United States) Clement, J. L.; Ritsher, J. B.; Saylor, S. A.; Kanas, N. 2006-01-01 Operating the International Space Station (ISS) involves an indefinite, continuous series of long-duration international missions, and this requires an unprecedented degree of cooperation across multiple sites, organizations, and nations. ISS flight controllers have had to find ways to maintain effective team performance in this challenging new context. The goal of this study was to systematically identify and evaluate the major leadership and cultural challenges faced by ISS flight controllers, and to highlight the approaches that they have found most effective to surmount these challenges. We conducted a qualitative survey using a semi-structured interview. Subjects included 14 senior NASA flight controllers who were chosen on the basis of having had substantial experience working with international partners. Data were content analyzed using an iterative process with multiple coders and consensus meetings to resolve discrepancies. To further explore the meaning of the interview findings, we also conducted some new analyses of data from a previous questionnaire study of Russian and American ISS mission control personnel.
The interview data showed that respondents had substantial consensus on several leadership and cultural challenges and on key strategies for dealing with them, and they offered a wide range of specific tactics for implementing these strategies. Surprisingly few respondents offered strategies for addressing the challenge of working with team members whose native language is not American English. The questionnaire data showed that Americans think it is more important than Russians that mission control personnel speak the same dialect of one shared common language. Although specific to the ISS program, our results are consistent with recent management, cultural, and aerospace research. We aim to use our results to improve training for current and future ISS flight controllers. 1. International Space Station Data Collection for Disaster Response Science.gov (United States) Stefanov, William L.; Evans, Cynthia A. 2014-01-01 Natural disasters - including such events as tropical storms, earthquakes, floods, volcanic eruptions, and wildfires - affect hundreds of millions of people worldwide, and also cause billions of dollars (USD) in damage to the global economy. Remotely sensed data acquired by orbital sensor systems has emerged as a vital tool to identify the extent of damage resulting from a natural disaster, as well as providing near-real time mapping support to response efforts on the ground and humanitarian aid efforts. The International Space Station (ISS) is a unique terrestrial remote sensing platform for acquiring disaster response imagery. Unlike automated remote-sensing platforms it has a human crew; is equipped with both internal and externally-mounted remote sensing instruments; and has an inclined, low-Earth orbit that provides variable views and lighting (day and night) over 95 percent of the inhabited surface of the Earth. As such, it provides a useful complement to free-flyer based, sun-synchronous sensor systems in higher altitude polar orbits.
While several nations have well-developed terrestrial remote sensing programs and assets for data collection, many developing nations do not have ready access to such resources. The International Charter, Space and Major Disasters (also known as the "International Disaster Charter", or IDC; http://www.disasterscharter.org/home) addresses this disparity. It is an agreement between agencies of several countries to provide - on a best-effort basis - remotely sensed data of natural disasters to requesting countries in support of disaster response. The lead US agency for interaction with the IDC is the United States Geological Survey (USGS); when an IDC request or "activation" is received, the USGS notifies the science teams for NASA instruments with targeting information for data collection. In the case of the ISS, the Earth Sciences and Remote Sensing (ESRS) Unit, part of the Astromaterials Research and Exploration Science 2. Space Station Workshop: Commercial Missions and User Requirements Science.gov (United States) 1988-01-01 The topics of discussion addressed during a three day workshop on commercial application in space are presented. Approximately half of the program was directed towards an overview and orientation to the Space Station Project; the technical attributes of space; and present and future potential commercial opportunities. The remaining time was spent addressing technological issues presented by previously-formed industry working groups, who attempted to identify the technology needs, problems or issues faced and/or anticipated by the following industries: extraction (mining, agriculture, petroleum, fishing, etc.); fabrication (manufacturing, automotive, aircraft, chemical, pharmaceutical and electronics); and services (communications, transportation and retail robotics). 
After the industry groups presented their technology issues, the workshop divided into smaller discussion groups composed of: space experts from NASA; academia; industry experts in the appropriate disciplines; and other workshop participants. The needs identified by the industry working groups, space station technical requirements, proposed commercial ventures and other issues related to space commercialization were discussed. The material summarized and reported here is the consensus of the discussion groups. 3. Space Station Evolution Logistics Lunar/Mars Initiative Case Science.gov (United States) Tucker, Michael; Thrasher, David; Davidson, Gordon 1990-01-01 The purpose of this study is to examine the potential logistics requirements for Space Station Freedom growth/evolution, and to provide definition of logistics systems for such growth/evolution. Logistics requirements and logistics system elements for the initial Space Station were used as a starting point. In the early part of the study, a small portion of the effort was used to characterize a spectrum of potential SSF growth/evolution possibilities, with the intent of assessing logistics requirements for a few selected SSF concepts. The data characterizing these concepts is included herein. An evolution concept of an R&D SSF and a Lunar/Mars Expedition (data from on-going related studies) began to emerge as a 'reference' concept, so the logistics study effort was shifted to focus on this concept. 4. Performance issues in management of the Space Station Information System Science.gov (United States) Johnson, Marjory J. 1988-01-01 The onboard segment of the Space Station Information System (SSIS), called the Data Management System (DMS), will consist of a Fiber Distributed Data Interface (FDDI) token-ring network. The performance of the DMS in scenarios involving two kinds of network management is analyzed.
In the first scenario, how the transmission of routine management messages impacts performance of the DMS is examined. In the second scenario, techniques for ensuring low latency of real-time control messages in an emergency are examined. 5. Fluid Physics Research on the International Space Station Science.gov (United States) Corban, Robert 2000-01-01 This document is a presentation in viewgraph format which reviews the laboratory facilities and their construction for the International Space Station (ISS). Graphic displays of the ISS are included, with special interest in the facilities available on the US Destiny module and other modules which will be used in the study of fluid physics on the ISS. There are also pictures and descriptions of various components of the Fluids and Combustion Facility. 6. Lunar Station: The Next Logical Step in Space Development Science.gov (United States) Pittman, Robert Bruce; Harper, Lynn; Newfield, Mark; Rasky, Daniel J. 2014-01-01 The International Space Station (ISS) is the product of the efforts of sixteen nations over the course of several decades. It is now complete, operational, and has been continuously occupied since November of 2000. Since then the ISS has been carrying out a wide variety of research and technology development experiments, and starting to produce some pleasantly startling results. The ISS has a mass of 420 metric tons, supports a crew of six with a yearly resupply requirement of around 30 metric tons, within a pressurized volume of 916 cubic meters, and a habitable volume of 388 cubic meters. Its solar arrays produce up to 84 kilowatts of power. In the course of developing the ISS, many lessons were learned and much valuable expertise was gained. Where do we go from here? The ISS offers an existence proof of the feasibility of sustained human occupation and operations in space over decades.
It also demonstrates the ability of many countries to work collaboratively on a very complex and expensive project in space over an extended period of time to achieve a common goal. By harvesting best practices and lessons learned, the ISS can also serve as a useful model for exploring architectures for beyond low-Earth-orbit (LEO) space development. This paper will explore the concept and feasibility for a Lunar Station. The Station concept can be implemented by either putting the equivalent capability of the ISS down on the surface of the Moon, or by developing the required capabilities through a combination of delivered materials and equipment and in situ resource utilization (ISRU). Scenarios that leverage existing technologies and capabilities, as well as capabilities that are under development and are expected to be available within the next 3-5 years, will be examined. This paper will explore how best practices and expertise gained from developing and operating the ISS and other relevant programs can be applied to effectively developing Lunar Station. 7. Telemetry handling on the Space Station data management system Science.gov (United States) Whitelaw, Virginia A. 1987-01-01 This paper examines the impact of telemetry handling on the design of the onboard networks that are part of the Space Station Data Management System (DMS). An architectural approach to satisfying the DMS requirement for support of the high throughput needed for telemetry transport and for servicing distributed computer systems is discussed. Several of the functionality vs. performance tradeoffs that must be made in developing an optimized mechanism for handling telemetry data in the DMS are considered. 8.
International Space Station Lithium-Ion Battery Start-Up Science.gov (United States) Dalton, Penni J.; North, Tim; Bowens, Ebony; Balcer, Sonia 2017-01-01 International Space Station Lithium-Ion Battery Start-Up. The International Space Station (ISS) primary Electric Power System (EPS) was originally designed to use Nickel-Hydrogen (Ni-H2) batteries to store electrical energy. The electricity for the space station is generated by its solar arrays, which charge batteries during insolation for subsequent discharge during eclipse. The Ni-H2 batteries are designed to operate at a 35% depth of discharge (DOD) maximum during normal operation in a Low Earth Orbit. As the oldest of the 48 Ni-H2 battery Orbital Replacement Units (ORUs) has been cycling since September 2006, these batteries are now approaching their end of useful life. In 2010, the ISS Program began the development of Lithium-Ion (Li-ion) batteries to replace the Ni-H2 batteries and concurrently funded a Li-Ion ORU and cell life testing project. The first set of 6 Li-ion battery replacements was launched in December 2016 and deployed in January 2017. This paper will discuss the Li-ion battery on-orbit start-up and the status of the Li-Ion cell and ORU life cycle testing. 9. A simple 5-DOF walking robot for space station application Science.gov (United States) Brown, H. Benjamin, Jr.; Friedman, Mark B.; Kanade, Takeo 1991-01-01 Robots on the NASA space station have a potential range of applications from assisting astronauts during EVA (extravehicular activity), to replacing astronauts in the performance of simple, dangerous, and tedious tasks; and to performing routine tasks such as inspections of structures and utilities. To provide a vehicle for demonstrating the pertinent technologies, a simple robot is being developed for locomotion and basic manipulation on the proposed space station.
In addition to the robot, an experimental testbed was developed, including a 1/3 scale (1.67 meter modules) truss and a gravity compensation system to simulate a zero-gravity environment. The robot comprises two flexible links connected by a rotary joint, with 2-degree-of-freedom wrist joints and grippers at each end. The grippers screw into threaded holes in the nodes of the space station truss, and enable it to walk by alternately shifting the base of support from one foot (gripper) to the other. Present efforts are focused on mechanical design, application of sensors, and development of control algorithms for lightweight, flexible structures. Long-range research will emphasize development of human interfaces to permit a range of control modes from teleoperated to semiautonomous, and coordination of robot/astronaut and multiple-robot teams. 10. Doses due to extra-vehicular activity on space stations Energy Technology Data Exchange (ETDEWEB) Deme, S.; Apathy, I.; Feher, I. [KFKI Atomic Energy Research Institute, Budapest (Hungary); Akatov, Y.; Arkhanguelski, V. [Institute of Biomedical Problems, State Scientific Center, Moscow (Russian Federation); Reitz, G. [DLR Institute of Aerospace Medicine, Cologne, Linder Hohe (Germany) 2006-07-01 One of the many risks of long duration space flight is the dose from cosmic radiation, especially during periods of intensive solar activity. At such times, particularly during extra-vehicular activity (E.V.A.), when the astronauts are not protected by the wall of the spacecraft, cosmic radiation is a potentially serious health threat. Accurate dose measurement becomes increasingly important during the assembly of large space objects. Passive integrating detector systems such as thermoluminescent dosimeters (TLDs) are commonly used for dosimetric mapping and personal dosimetry on space vehicles. K.F.K.I.
Atomic Energy Research Institute has developed and manufactured a series of thermoluminescent dosimeter systems, called Pille, for measuring cosmic radiation doses in the 3 μGy to 10 Gy range, consisting of a set of CaSO₄:Dy bulb dosimeters and a small, compact, TLD reader suitable for on-board evaluation of the dosimeters. Such a system offers a solution for E.V.A. dosimetry as well. By means of such a system, highly accurate measurements were carried out on board the Salyut-6, -7 and Mir Space Stations, on the Space Shuttle, and most recently on several segments of the International Space Station (I.S.S.). The Pille system was used to make the first measurements of the radiation exposure of cosmonauts during E.V.A. Such E.V.A. measurements were carried out twice (on June 12 and 16, 1987) by Y. Romanenko, the commander of the second crew of Mir. During the E.V.A. one of the dosimeters was fixed in a pocket on the outer surface of the left leg of his space-suit; a second dosimeter was located inside the station for reference measurements. The advanced TLD system Pille 96 was used during the Nasa-4 (1997) mission to monitor the cosmic radiation dose inside the Mir Space Station and to measure the exposure of two of the astronauts during their E.V.A. activities. The extra doses of two E.V.A. during the Euromir 95 and one E.V.A. during the Nasa-4 experiment 11. International Space Station -- Fluid Physics Rack Science.gov (United States) 2000-01-01 The optical bench for the Fluids Integrated Rack section of the Fluids and Combustion Facility (FCF) is shown extracted for servicing and with the optical bench rotated 90 degrees for access to the rear elements. The FCF will be installed, in phases, in the Destiny, the U.S. Laboratory Module of the International Space Station (ISS), and will accommodate multiple users for a range of investigations. This is an engineering mockup; the flight hardware is subject to change as designs are refined.
The FCF is being developed by the Microgravity Science Division (MSD) at the NASA Glenn Research Center. (Photo credit: NASA/Marshall Space Flight Center) 12. Space Environment Data Acquisition with the Kibo Exposed Facility on the International Space Station (ISS) Directory of Open Access Journals (Sweden) T Obara 2010-02-01 Full Text Available The Space Environment Data Acquisition equipment (SEDA), which was mounted on the Exposed Facility (EF) of the Japanese Experiment Module (JEM), also known as "Kibo" on the International Space Station (ISS), was developed to measure the space environment along the orbit of the ISS. This payload module, called the SEDA-Attached Payload (AP), began to measure the space environment in August 2009. This paper reports the mission objectives, instrumentation, and current status of the SEDA-AP. 13. Results of microbial research of environment of international space station Science.gov (United States) Novikova, N.; Poddubko, S.; Deshevaya, E.; Polikarpov, N.; Rakova, N. Many years of exploitation of orbital space stations have brought ecological problems to the fore, among which the microbial community of the environment plays a most important role. Qualitative and quantitative characteristics of microorganisms in the environment of a space object can change considerably under the influence of conditions of space flight. In the process of exploitation of the International Space Station (ISS), the microflora of air, interior surfaces and equipment is monitored on a regular basis to maintain continuous assessment of the sanitary and microbiological state of the environment. Up to the present time 32 species of microorganisms have been recovered in the ISS, namely 15 species of bacteria and 17 species of moldy fungi. In the composition of microbial species mainly nonpathogenic species have been found.
However, a number of bacteria discovered on the ISS, particularly some representatives of human microflora, are capable of causing different diseases when the human immune system is compromised. Moreover, some bacteria and a considerable number of fungi are known to be potential biodestructors of construction materials, which leads to biodeterioration of construction materials and equipment. Results of our research show that the existing set of life-supporting systems can maintain microbial contamination within regulated levels. Furthermore, constant microbial monitoring of the environment is an integral part of providing for the safety of space missions. 14. The International Space Station (ISS) Education Accomplishments and Opportunities Science.gov (United States) Alleyne, Camille W.; Blue, Regina; Mayo, Susan 2012-01-01 15. Space Station Freedom - Configuration management approach to supporting concurrent engineering and total quality management. [for NASA Space Station Freedom Program] Science.gov (United States) Gavert, Raymond B. 1990-01-01 Some experiences of NASA configuration management in providing concurrent engineering support to the Space Station Freedom program for the achievement of life cycle benefits and total quality are discussed. Three change decision experiences involving tracing requirements and automated information systems of the electrical power system are described. The potential benefits of concurrent engineering and total quality management include improved operational effectiveness, reduced logistics and support requirements, prevention of schedule slippages, and life cycle cost savings. It is shown how configuration management can influence the benefits attained through disciplined approaches and innovations that compel consideration of all the technical elements of engineering and quality factors that apply to the program development, transition to operations and in operations.
Configuration management experiences involving the Space Station program's tiered management structure, the work package contractors, international partners, and the participating NASA centers are discussed. 16. Operation of hydrologic data collection stations by the U.S. Geological Survey in 1987 Science.gov (United States) Condes de la Torre, Alberto 1987-01-01 The U.S. Geological Survey operates hydrologic data collection stations nationwide which serve the needs of all levels of government, the private sector, and the general public, for water resources information. During fiscal year 1987, surface water discharge was determined at 10,624 stations; stage data on streams, reservoirs, and lakes were recorded at 1,806 stations; and various surface water quality characteristics were determined at 2,901 stations. In addition, groundwater levels were measured at 32,588 stations, and the quality of groundwater was determined at 9,120 stations. Data on sediment were collected daily at 174 stations and on a periodic basis at 878 stations. Information on precipitation quantity was collected at 909 stations, and the quality of precipitation was analyzed at 78 stations. Data collection platforms for satellite telemetry of hydrologic information were used at 2,292 Geological Survey stations. Funding for the hydrologic stations was derived, either solely or in combination, from three major sources - the Geological Survey's Federal Program appropriation, the Federal-State Cooperative Program, and reimbursements from other Federal agencies. The number of hydrologic stations operated by the Geological Survey declined from fiscal year 1983 to 1987. The number of surface water discharge stations was reduced by 452 stations; surface water quality stations declined by 925 stations; groundwater level stations declined by 1,051 stations; while groundwater quality stations increased by 1,472 stations. (Author's abstract) 17.
Modal Testing of Seven Shuttle Cargo Elements for Space Station Science.gov (United States) Kappus, Kathy O.; Driskill, Timothy C.; Parks, Russel A.; Patterson, Alan (Technical Monitor) 2001-01-01 From December 1996 to May 2001, the Modal and Control Dynamics Team at NASA's Marshall Space Flight Center (MSFC) conducted modal tests on seven large elements of the International Space Station. Each of these elements has been or will be launched as a Space Shuttle payload for transport to the International Space Station (ISS). As with other Shuttle payloads, modal testing of these elements was required for verification of the finite element models used in coupled loads analyses for launch and landing. The seven modal tests included three modules - Node, Laboratory, and Airlock, and four truss segments - P6, P3/P4, S1/P1, and P5. Each element was installed and tested in the Shuttle Payload Modal Test Bed at MSFC. This unique facility can accommodate any Shuttle cargo element for modal test qualification. Flexure assemblies were utilized at each Shuttle-to-payload interface to simulate a constrained boundary in the load carrying degrees of freedom. For each element, multiple-input, multiple-output burst random modal testing was the primary approach with controlled input sine sweeps for linearity assessments. The accelerometer channel counts ranged from 252 channels to 1251 channels. An overview of these tests, as well as some lessons learned, will be provided in this paper. 18. From CERN to the International Space Station and back CERN Multimedia CERN. Geneva 2007-01-01 In December I flew on the Space Shuttle Discovery to ISS, the International Space Station. The main objectives were to continue building ISS, deliver consumables, spare parts and experiments, and to exchange one crew member on ISS.
During the 8-day stay at ISS, I participated in three space-walks, but also got the opportunity to perform one experiment, ALTEA, related to radiation in space and light flashes seen by many people in space. I will give a quick personal history, from when I was a Fellow at CERN in 1990 and learned that I could apply to become an ESA astronaut, to when I finally boarded a spacecraft to launch on Dec. 9th 2006. A 17 minute video will tell the story about the flight itself. The second half of the talk will be about research related to radiation in space that I have been involved in since joining ESA in 1992. In particular, about light flashes that were first reported on Apollo-11 in 1969, and the SilEye detectors flown on Mir and ISS to investigate fluxes of charged particles ... 19. Commercial combustion research aboard the International Space Station Science.gov (United States) Schowengerdt, F. D. 1999-01-01 The Center for Commercial Applications of Combustion in Space (CCACS) is planning a number of combustion experiments to be done on the International Space Station (ISS). These experiments will be conducted in two ISS facilities, the SpaceDRUMS™ Acoustic Levitation Furnace (ALF) and the Combustion Integrated Rack (CIR) portion of the Fluids and Combustion Facility (FCF). The experiments are part of ongoing commercial projects involving flame synthesis of ceramic powders, catalytic combustion, water mist fire suppression, glass-ceramics for fiber and other applications and porous ceramics for bone replacements, filters and catalyst supports. Ground- and parabolic aircraft-based experiments are currently underway to verify the scientific bases and to test prototype flight hardware. The projects have strong external support. 20. Beyond the International Space Station: The Future of Human Spaceflight Science.gov (United States) Rycroft, M. 2002-10-01 What will be the future directions of human spaceflight?
That was the key question addressed at this Symposium - in an atmosphere of realism mixed with idealism. Building on the foundations of the International Space Station, will it be space tourism, habitats in space, mining an asteroid on a collision course with Earth, bases on the Moon, or the colonisation of Mars? What robotic missions will be essential before crewed missions can be launched? How will these be financed? And when might it all happen? Many ideas from the USA, Canada, Europe, Russia and Japan were put forward on the "whys" and the "hows" of our future exploration of the final frontier and what is likely to be needed to make dreams come true. The Proceedings of this Symposium are invaluable to all people in industry, government and academia who are interested in the future of human spaceflight. Link: http://www.wkap.nl/prod/b/1-4020-0962-3 1. Space station dynamics, attitude control and momentum management Science.gov (United States) Sunkel, John W.; Singh, Ramen P.; Vengopal, Ravi 1989-01-01 The Space Station Attitude Control System software test-bed provides a rigorous environment for the design, development and functional verification of GN and C algorithms and software. The approach taken for the simulation of the vehicle dynamics and environmental models using a computationally efficient algorithm is discussed. The simulation includes capabilities for docking/berthing dynamics, prescribed motion dynamics associated with the Mobile Remote Manipulator System (MRMS) and microgravity disturbances. The vehicle dynamics module interfaces with the test-bed through the central Communicator facility which is in turn driven by the Station Control Simulator (SCS) Executive. The Communicator addresses issues such as the interface between the discrete flight software and the continuous vehicle dynamics, and multi-programming aspects such as the complex flow of control in real-time programs. 
Combined with the flight software and redundancy management modules, the facility provides a flexible, user-oriented simulation platform. 2. International Space Station End-of-Life Probabilistic Risk Assessment Science.gov (United States) Duncan, Gary W. 2014-01-01 The International Space Station (ISS) end-of-life (EOL) cycle is currently scheduled for 2020, although there are ongoing efforts to extend the ISS life cycle through 2028. The EOL for the ISS will require deorbiting the ISS. This will be the largest manmade object ever to be deorbited; therefore, safely deorbiting the station will be a very complex problem. This process is being planned by NASA and its international partners. Numerous factors will need to be considered to accomplish this, such as target corridors, orbits, altitude, drag, maneuvering capabilities, etc. The ISS EOL Probabilistic Risk Assessment (PRA) will play a part in this process by estimating the reliability of the hardware supplying the maneuvering capabilities. The PRA will model the probability of failure of the systems supplying and controlling the thrust needed to aid in the de-orbit maneuvering. 3. Cathodes Delivered for Space Station Plasma Contactor System Science.gov (United States) Patterson, Michael J. 1999-01-01 The International Space Station's (ISS) power system is designed with high-voltage solar arrays that typically operate at output voltages of 140 to 160 volts (V). The ISS grounding scheme electrically ties the habitat modules, structure, and radiators to the negative tap of the solar arrays. Without some active charge control method, this electrical configuration and the plasma current balance would cause the habitat modules, structure, and radiators to float to voltages as large as -120 V with respect to the ambient space plasma. With such large negative floating potentials, the ISS could have deleterious interactions with the space plasma.
These interactions could include arcing through insulating surfaces and sputtering of conductive surfaces as ions are accelerated by the spacecraft plasma sheath. A plasma contactor system was baselined on the ISS to prevent arcing and sputtering. The sole requirement for the system is contained within a single directive (SSP 30000, paragraph 3.1.3.2.1.8): "The Space Station structure floating potential at all points on the Space Station shall be controlled to within 40 V of the ionospheric plasma potential using a plasma contactor." NASA is developing this plasma contactor as part of the ISS electrical power system. For ISS, efficient and rapid emission of high electron currents is required from the plasma contactor system under conditions of variable and uncertain current demand. A hollow cathode plasma source is well suited for this application and was, therefore, selected as the design approach for the station plasma contactor system. In addition to the plasma source, which is referred to as a hollow cathode assembly, or HCA, the plasma contactor system includes two other subsystems. These are the power electronics unit and the xenon gas feed system. The Rocketdyne Division of Boeing North American is responsible for the design, fabrication, assembly, test, and integration of the plasma contactor system. Because of 4. International Space Station: National Laboratory Education Concept Development Report Science.gov (United States) 2006-01-01 The International Space Station (ISS) program has brought together 16 spacefaring nations in an effort to build a permanent base for human explorers in low-Earth orbit, the first stop past Earth in humanity's path into space. The ISS is a remarkably capable spacecraft, by significant margins the largest and most complex space vehicle ever built. 
Planned for completion in 2010, the ISS will provide a home for laboratories equipped with a wide array of resources to develop and test the technologies needed for future generations of space exploration. The resources of the only permanent base in space clearly have the potential to find application in areas beyond the research required to enable future exploration missions. In response to Congressional direction in the 2005 National Aeronautics and Space Administration (NASA) Authorization Act, NASA has begun to examine the value of these unique capabilities to other national priorities, particularly education. In early 2006, NASA invited education experts from other Federal agencies to participate in a Task Force charged with developing concepts for using the ISS for educational purposes. Senior representatives from the education offices of the Department of Defense, Department of Education, Department of Energy, National Institutes of Health, and National Science Foundation agreed to take part in the Task Force and have graciously contributed their time and energy to produce a plan that lays out a conceptual framework for potential utilization of the ISS for educational activities sponsored by Federal agencies as well as other future users. 5. Space station common module network topology and hardware development Science.gov (United States) Anderson, P.; Braunagel, L.; Chwirka, S.; Fishman, M.; Freeman, K.; Eason, D.; Landis, D.; Lech, L.; Martin, J.; Mccorkle, J. 1990-01-01 Conceptual space station common module power management and distribution (SSM/PMAD) network layouts and detailed network evaluations were developed. Individual pieces of hardware to be developed for the SSM/PMAD test bed were identified. A technology assessment was developed to identify pieces of equipment requiring development effort. Equipment lists were developed from the previously selected network schematics. 
Additionally, functional requirements for the network equipment, as well as other requirements which affected the suitability of specific items for use on the Space Station Program, were identified. Assembly requirements were derived based on the SSM/PMAD developed requirements and on the selected SSM/PMAD network concepts. Basic requirements and simplified design block diagrams are included. DC remote power controllers were successfully integrated into the DC Marshall Space Flight Center breadboard. Two DC remote power controller (RPC) boards experienced mechanical failure of UES 706 stud-mounted diodes during mechanical installation of the boards into the system. These broken diodes caused input to output shorting of the RPCs. The UES 706 diodes were replaced on these RPCs, which eliminated the problem. The DC RPCs as they exist in the present breadboard configuration do not provide ground fault protection because the RPC was designed to switch only the hot-side current. If ground fault protection were to be implemented, it would be necessary to design the system so the RPC switched both the hot and the return sides of power. 6. Constrained Burn Optimization for the International Space Station Science.gov (United States) Brown, Aaron J.; Jones, Brandon A. 2017-01-01 In long-term trajectory planning for the International Space Station (ISS), translational burns are currently targeted sequentially to meet the immediate trajectory constraints, rather than simultaneously to meet all constraints, do not employ gradient-based search techniques, and are not optimized for a minimum total delta-v (Δv) solution. An analytic formulation of the constraint gradients is developed and used in an optimization solver to overcome these obstacles. Two trajectory examples are explored, highlighting the advantage of the proposed method over the current approach, as well as the potential Δv and propellant savings in the event of propellant shortages. 7.
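The minimum-Δv targeting problem summarized in the burn-optimization abstract above can be illustrated with the classical two-burn (Hohmann) transfer between circular orbits. This is a hedged sketch, not the authors' solver: it only shows the textbook formula from which a reboost Δv budget would start, with an illustrative ISS-like altitude raise.

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def hohmann_delta_v(r1_m, r2_m):
    """Delta-v (m/s) for the two burns of a Hohmann transfer
    between circular orbits of radius r1_m and r2_m (r2_m > r1_m)."""
    v1 = math.sqrt(MU_EARTH / r1_m)  # circular speed at r1
    v2 = math.sqrt(MU_EARTH / r2_m)  # circular speed at r2
    dv1 = v1 * (math.sqrt(2 * r2_m / (r1_m + r2_m)) - 1)  # enter transfer ellipse
    dv2 = v2 * (1 - math.sqrt(2 * r1_m / (r1_m + r2_m)))  # circularize at r2
    return dv1, dv2

# Illustrative reboost (made-up scenario): raise a ~400 km orbit by 10 km.
r_earth = 6378.137e3
dv1, dv2 = hohmann_delta_v(r_earth + 400e3, r_earth + 410e3)
print(f"burn 1: {dv1:.2f} m/s, burn 2: {dv2:.2f} m/s, total: {dv1 + dv2:.2f} m/s")
```

The paper's actual method optimizes many burns against multiple trajectory constraints simultaneously; this two-burn case is only the simplest instance of the underlying Δv bookkeeping.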
Process material management in the Space Station environment Science.gov (United States) Perry, J. L.; Humphries, W. R. 1988-01-01 The Space Station will provide a unique facility for conducting material-processing and life-science experiments under microgravity conditions. These conditions place special requirements on the U.S. Laboratory for storing and transporting chemicals and process fluids, reclaiming water from selected experiments, treating and storing experiment wastes, and providing vacuum utilities. To meet these needs and provide a safe laboratory environment, the Process Material Management System (PMMS) is being developed. Preliminary design requirements and concepts related to the PMMS are addressed, and the MSFC PMMS breadboard test facility and a preliminary plan for validating the overall system design are discussed. 8. International Space Station Major Constituent Analyzer On-Orbit Performance Science.gov (United States) Gardner, Ben D.; Erwin, Phillip M.; Cougar, Tamara; Ulrich, BettyLynn 2017-01-01 The Major Constituent Analyzer (MCA) is a mass spectrometer based system that measures the major atmospheric constituents on the International Space Station. A number of limited-life components require periodic change-out, including the ORU 02 analyzer and the ORU 08 Verification Gas Assembly. The most recent ORU 02 and ORU 08 assemblies in the LAB MCA are operating nominally. For ORU 02, the ion source filaments and ion pump lifetime continue to be key determinants of MCA performance. Finally, the Node 3 MCA is being brought to an operational configuration. 9. New Results from AMS on the International Space Station CERN Multimedia CERN. Geneva 2014-01-01 The Alpha Magnetic Spectrometer is a precision particle physics detector. It was installed on the International Space Station on May 19, 2011. Results on electrons and positrons from the first 41 billion events will be presented. 
This includes the behavior of the positron fraction as a function of energy and the observation that the positron fraction reaches its maximum at an energy of 275 ± 32 GeV. The measurement of the positron flux and the electron flux shows that both fluxes change their behavior at 30 GeV, but the fluxes are significantly different in their magnitude and energy dependence. The combined (e+ + e-) flux will also be presented. 10. Space Station logistics policy - Risk management from the top down Science.gov (United States) Paules, Granville; Graham, James L., Jr. 1990-01-01 Considerations are presented in the area of risk management specifically relating to logistics and system supportability. These considerations form a basis for confident application of concurrent engineering principles to a development program, aiming at simultaneous consideration of support and logistics requirements within the engineering process as the system concept and designs develop. It is shown that, by applying such a process, the chances of minimizing program logistics and supportability risk in the long term can be improved. The problem of analyzing and minimizing integrated logistics risk for the Space Station Freedom Program is discussed. 11. High pressure water electrolysis for space station EMU recharge Science.gov (United States) Lance, Nick; Puskar, Michael; Moulthrop, Lawrence; Zagaja, John 1988-01-01 A high pressure oxygen recharge system (HPORS) is being developed for application on board the Space Station. This electrolytic system can provide oxygen at up to 6000 psia without a mechanical compressor. The Hamilton Standard HPORS, based on a solid polymer electrolyte system, is an extension of the much larger and successful 3000 psia system of the U.S. Navy. Cell modules have been successfully tested under conditions beyond those which spacecraft may encounter during launch.
The control system with double redundancy and mechanical backups for all electronically controlled components is designed to ensure a safe shutdown. 12. Ovarian Tumor Cells Studied Aboard the International Space Station (ISS) Science.gov (United States) 2001-01-01 In August 2001, principal investigator Jeanne Becker sent human ovarian tumor cells to the International Space Station (ISS) aboard the STS-105 mission. The tumor cells were cultured in microgravity for a 14-day growth period and were analyzed for changes in the rate of cell growth and synthesis of associated proteins. In addition, they were evaluated for the expression of several proteins that are the products of oncogenes, which cause the transformation of normal cells into cancer cells. This photo, which was taken by astronaut Frank Culbertson, who conducted the experiment for Dr. Becker, shows two cell culture bags containing LN1 ovarian carcinoma cell cultures. 13. Management of the Space Station Freedom onboard local area network Science.gov (United States) Miller, Frank W.; Mitchell, Randy C. 1991-01-01 An operational approach is proposed for managing the Data Management System Local Area Network (LAN) on Space Station Freedom. An overview of the onboard LAN elements is presented first, followed by a proposal of the operational guidelines by which management of the onboard network may be effected. To implement the guidelines, a recommendation is then presented on a set of network management parameters which should be made available in the onboard Network Operating System Computer Software Configuration Item and Fiber Distributed Data Interface firmware. Finally, some implications for the implementation of the various network management elements are discussed. 14.
International Research Results and Accomplishments From the International Space Station Science.gov (United States) Ruttley, Tara M.; Robinson, Julie A.; Tate-Brown, Judy; Perkins, Nekisha; Cohen, Luchino; Marcil, Isabelle; Heppener, Marc; Hatton, Jason; Tasaki, Kazuyuki; Umemura, Sayaka; 2016-01-01 In 2016, the International Space Station (ISS) partnership published the first-ever compilation of international ISS research publications resulting from research performed on the ISS through 2011. The International Space Station Research Accomplishments: An Analysis of Results From 2000-2011 is a collection of summaries of over 1,200 journal publications that describe ISS research in the areas of biology and biotechnology; Earth and space science; educational activities and outreach; human research; physical sciences; technology development and demonstration; and results from ISS operations. This paper will summarize the ISS results publications obtained through 2011 on behalf of the ISS Program Science Forum, which is made up of senior science representatives across the international partnership. NASA's ISS Program Science office maintains an online experiment database (www.nasa.gov/issscience) that tracks and communicates ISS research activities across the entire ISS partnership, and it is continuously updated. It captures ISS experiment summaries and results and includes citations to the journals, conference proceedings, and patents as they become available. The International Space Station Research Accomplishments: An Analysis of Results From 2000-2011 is a testament to the research that was underway even as the ISS laboratory was being built. It reflects the scientific knowledge gained from ISS research, and how it impacts the fields of science in both space and traditional science disciplines on Earth.
Now, during a time when utilization is at its busiest, and with extension of the ISS through at least 2024, the ISS partners work together to track the accomplishments and the new knowledge gained in a way that will impact humanity like no laboratory on Earth. The ISS Program Science Forum will continue to capture and report on these results in the form of journal publications, conference proceedings, and patents. We anticipate that successful ISS research will 15. Science.gov (United States) Felice, Ronald R.; Kienlen, Mike 2002-12-01 It is inevitable that the International Space Station (ISS) will play a significant role in the conduct of science in space. However, in order to provide this service to a wide and broad community and to perform it cost effectively, alternative concepts must be considered to complement NASA's institutional capability. Currently, science payload forward and return data services must compete for higher priority ISS infrastructure support requirements. Furthermore, initial astronaut crews will be limited to a single shift. Much of their time and activities will be required to meet their physical needs (exercise, recreation, etc.), station maintenance, and station operations, leaving precious little time to actively conduct science payload operations. ISS construction plans include the provisioning of several truss-mounted, space-hardened pallets, both zenith and nadir facing. The ISS pallets will provide a platform to conduct both Earth and space sciences. Additionally, the same pallets can be used for life and material sciences, as astronauts could place and retrieve sealed canisters for long-term microgravity exposure. Thus the pallets provide great potential for enhancing ISS science return. This significant addition to ISS payload capacity has the potential to exacerbate priorities and service contention factors within the existing institution.
In order to have it all, i.e., more science and less contention, the pallets must be data-smart and operate autonomously so that NASA institutional services are not additionally taxed. Specifically, the "Enhanced Science Capability on the International Space Station" concept involves placing data handling and spread spectrum X-band communications capabilities directly on ISS pallets. Spread spectrum techniques are considered as a means of discriminating between different pallets as well as of eliminating RFI. The data and RF systems, similar to those of "free flyers", include a fully functional command and data handling system 16. Parking Space Occupancy at Rail Stations in Klang Valley Directory of Open Access Journals (Sweden) Ho Phooi Wai 2017-01-01 Full Text Available The development of the Klang Valley Integrated Rapid Transit system in Klang Valley, Malaysia has been quickly gaining momentum during recent years. There will be two new MRT lines (MRT Line 1 and MRT Line 2) and one new LRT line (LRT Line 3) extended from the current integrated rail transit system by year 2020, with more than 90 new rail stations. With the substantial addition of potential rail passengers, there are doubts whether the existing Park and Ride facilities in Klang Valley are able to accommodate the future parking space demand at rail stations. This research studies the parking occupancy at various Park and Ride facilities in Klang Valley, namely Taman Jaya, Asia Jaya, Taman Paramount, Taman Bahagia and Kelana Jaya, by applying the non-conventional method utilizing Google Earth imagery. Results showed that the parking occupancy rates at these LRT stations were 100% or more before the commencement of the LRT extension (Kelana Jaya and Ampang Lines) in 2016, and in the range of 36% to 100% after the commencement of the LRT extension, due to the additionally built car parks and changes in parking pattern with dispersed passenger traffic. 17.
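The occupancy figures in the Klang Valley parking study above are ratios of observed occupied bays to marked capacity, which can exceed 100% when vehicles park outside marked bays. A minimal sketch of that computation (the station names and counts below are invented for illustration, not taken from the paper):

```python
def occupancy_rate(occupied, capacity):
    """Parking occupancy as a percentage of marked capacity.
    Values above 100% indicate overflow/illegal parking."""
    if capacity <= 0:
        raise ValueError("capacity must be positive")
    return 100.0 * occupied / capacity

# Hypothetical counts read off aerial imagery for two stations.
counts = {"Station A": (480, 450), "Station B": (230, 500)}
for name, (occupied, capacity) in counts.items():
    print(f"{name}: {occupancy_rate(occupied, capacity):.0f}% occupied")
```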
International Space Station (ISS) Meteoroid/Orbital Debris Shielding Science.gov (United States) Christiansen, Eric L. 1999-01-01 Design practices to provide protection for International Space Station (ISS) crew and critical equipment from meteoroid and orbital debris (M/OD) impacts have been developed. Damage modes and failure criteria are defined for each spacecraft system. Hypervelocity impact tests and analyses are used to develop ballistic limit equations (BLEs) for each exposed spacecraft system. BLEs define impact particle sizes that result in threshold failure of a particular spacecraft system as a function of impact velocity, angle, and particle density. The BUMPER computer code is used to determine the probability of no penetration (PNP) of the spacecraft shielding based on NASA standard meteoroid/debris models, a spacecraft geometry model, and the BLEs. BUMPER results are used to verify spacecraft shielding requirements. Low-weight, high-performance shielding alternatives have been developed at the NASA Johnson Space Center (JSC) Hypervelocity Impact Technology Facility (HITF) to meet spacecraft protection requirements. 18. Physical sciences research plans for the International Space Station Science.gov (United States) Trinh, E. H. 2003-01-01 The restructuring of the research capabilities of the International Space Station has forced a reassessment of the Physical Sciences research plans and a re-targeting of the major scientific thrusts. The combination of already selected peer-reviewed flight investigations with the initiation of new research and technology programs will allow the maximization of the ISS scientific and technological potential. Fundamental and applied research will use a combination of ISS-based facilities, ground-based activities, and other experimental platforms to address issues impacting fundamental knowledge, industrial and medical applications on Earth, and the technology required for human space exploration.
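The probability-of-no-penetration figure described in the shielding abstract above is conventionally built on a Poisson assumption: if the environment models predict N expected failure-causing impacts over the exposed area and mission duration, then PNP = exp(-N). The sketch below uses that standard relation with made-up flux numbers; it is not the BUMPER code, only the elementary probability step behind such tools.

```python
import math

def expected_penetrations(flux_per_m2_yr, area_m2, years):
    """Expected number of impacts exceeding the ballistic limit, given
    a flux of such particles (illustrative numbers, not a NASA model)."""
    return flux_per_m2_yr * area_m2 * years

def prob_no_penetration(n_expected):
    """Poisson probability that zero failure-causing impacts occur."""
    return math.exp(-n_expected)

# Hypothetical case: 1e-6 critical impacts per m^2 per year,
# 1000 m^2 of exposed shielding, 10-year exposure.
n = expected_penetrations(1e-6, 1000.0, 10.0)
print(f"expected penetrations: {n:.3f}, PNP: {prob_no_penetration(n):.4f}")
```

In a real analysis the expected count is itself an integral of the environment flux over all impact velocities, angles, and particle sizes above the BLE threshold, evaluated per surface element of the geometry model.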
The current flight investigation research plan shows a large number of principal investigators selected to use the remaining planned research facilities. © 2003 American Institute of Aeronautics and Astronautics. Published by Elsevier Science Ltd. All rights reserved. 19. International Space Station Utilization: Tracking Investigations from Objectives to Results Science.gov (United States) Ruttley, T. M.; Mayo, Susan; Robinson, J. A. 2011-01-01 Since the first module was assembled on the International Space Station (ISS), on-orbit investigations have been underway across all scientific disciplines. The facilities dedicated to research on ISS have supported over 1100 investigations from over 900 scientists representing over 60 countries. Relatively few of these investigations are tracked through the traditional NASA grants monitoring process, and with ISS National Laboratory use growing, the ISS Program Scientist's Office has been tasked with tracking all ISS investigations from objectives to results. Detailed information regarding each investigation is now collected once, at the first point it is proposed for flight, and is kept in an online database that serves as a single source of information on the core objectives of each investigation. Different fields are used to provide the appropriate level of detail for research planning, astronaut training, and public communications. http://www.nasa.gov/iss-science/. With each successive year, publications of ISS scientific results, which are used to measure success of the research program, have shown steady increases in all scientific research areas on the ISS. Accurately identifying, collecting, and assessing the research results publications is a challenge and a priority for the ISS research program, and we will discuss the approaches that the ISS Program Science Office employs to meet this challenge. We will also address the online resources available to support outreach and communication of ISS research to the public.
Keywords: International Space Station, Database, Tracking, Methods 20. Integration by parts. [associated with Space Station Freedom] Science.gov (United States) Barry, Thomas; Scheffer, Terrance J. 1990-01-01 This paper describes the unique integration and verification challenges associated with the Space Station Freedom and an approach to solving these problems using Data Management Systems (DMS) Kits. These DMS Kits will help alleviate the complex integration problems inherent in building, assembling and testing the Space Station. Particular emphasis has been placed on utilizing the capabilities and services of the on-board DMS to provide the integration and verification tools, not only for the DMS but for the other on-board distributed systems as well. DMS Kits are provided to system/software developers across the program. These DMS Kits provide a common set of integration and verification tools and hardware. Each system developer can then utilize, through the kits, a simulation of the complete data processing environment which will be available on orbit. The paper describes the evolution of the integration process from the system level to the final integration of multiple launch packages. DMS Kits are used throughout this process, which addresses both the ground and on-orbit aspects of the problem. 1. Space Station Freedom solar array panels plasma interaction test facility Science.gov (United States) Martin, Donald F.; Mellott, Kenneth D. 1989-01-01 The Space Station Freedom Power System will make extensive use of photovoltaic (PV) power generation. The phase 1 power system consists of two PV power modules, each capable of delivering 37.5 kW of conditioned power to the user. Each PV module consists of two solar arrays. Each solar array is made up of two solar blankets. Each solar blanket contains 82 PV panels. The PV power modules provide a 160 V nominal operating voltage.
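The solar-array hierarchy enumerated in the abstract above implies some quick totals; a back-of-envelope check in Python, using only the counts and power figure stated in the abstract:

```python
# Phase 1 PV hierarchy from the abstract: 2 PV power modules,
# each with 2 solar arrays of 2 blankets, each blanket holding 82 panels.
MODULES = 2
ARRAYS_PER_MODULE = 2
BLANKETS_PER_ARRAY = 2
PANELS_PER_BLANKET = 82
KW_PER_MODULE = 37.5  # conditioned power delivered per module

total_panels = MODULES * ARRAYS_PER_MODULE * BLANKETS_PER_ARRAY * PANELS_PER_BLANKET
total_power_kw = MODULES * KW_PER_MODULE

print(f"{total_panels} PV panels, {total_power_kw:.1f} kW conditioned power")
# → 656 PV panels, 75.0 kW conditioned power
```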
Previous research has shown that there are electrical interactions between a plasma environment and a photovoltaic power source. The interactions take two forms: parasitic current loss (occurs when the current produced by the PV panel leaves at a high potential point and travels through the plasma to a lower potential point, effectively shorting that portion of the PV panel); and arcing (occurs when the PV panel electrically discharges into the plasma). The PV solar array panel plasma interaction test was conceived to evaluate the effects of these interactions on the Space Station Freedom-type PV panels as well as to conduct further research. The test article consists of two active solar array panels in series. Each panel consists of two hundred 8 cm x 8 cm silicon solar cells. The test requirements dictated specifications in the following areas: plasma environment/plasma sheath; outgassing; thermal requirements; solar simulation; and data collection requirements. 2. Mir Contamination Observations and Implications to the International Space Station Science.gov (United States) Soares, Carlos; Mikatarian, Ron 2000-01-01 A series of external contamination measurements were made on the Russian Mir Space Station. The Mir external contamination observations summarized in this paper were essential in assessing the system-level impact of Russian Segment-induced contamination on the International Space Station (ISS). Mir contamination observations include results from a series of flight experiments: CNES Comes-Aragatz, retrieved NASA camera bracket, Euro-Mir '95 ICA, retrieved NASA Trek blanket, Russian Astra-II, Mir Solar Array Return Experiment (SARE), etc. Results from these experiments were studied in detail to characterize Mir-induced contamination. In conjunction with Mir contamination observations, Russian materials samples were tested for condensable outgassing rates in the U.S. These test results were essential in the characterization of Mir contamination sources.
Once Mir contamination sources were identified and characterized, activities to assess the implications to ISS were implemented. As a result, modifications in Russian materials selection and/or usage were implemented to control contamination and mitigate risk to ISS. 3. Analysis of International Space Station Vehicle Materials on MISSE 6 Science.gov (United States) Finckenor, Miria; Golden, Johnny; Kravchenko, Michael; O'Rourke, Mary Jane 2010-01-01 The International Space Station Materials and Processes team has multiple material samples on MISSE 6, 7 and 8 to observe Low Earth Orbit (LEO) environmental effects on Space Station materials. Optical properties, thickness/mass loss, surface elemental analysis, visual and microscopic analysis for surface change are some of the techniques employed in this investigation. Results for the following MISSE 6 sample materials will be presented: deionized water sealed anodized aluminum; Hyzod(tm) polycarbonate used to temporarily protect ISS windows; Russian quartz window material; Beta Cloth with Teflon(tm) reformulated without perfluorooctanoic acid (PFOA); and electroless nickel. Discussion for current and future MISSE materials experiments will be presented. MISSE 7 samples are: more deionized water sealed anodized aluminum, including Photofoil(tm); indium tin oxide (ITO) over-coated Kapton(tm) used as thermo-optical surfaces; mechanically scribed tin-plated beryllium-copper samples for "tin pest" growth (alpha/beta transformation); and beta cloth backed with a black coating rather than aluminization. MISSE 8 samples are: exposed "scrim cloth" (fiberglass weave) from the ISS solar array wing material, protective fiberglass tapes and sleeve materials, and optical witness samples to monitor contamination. 4. Nutritional Status Assessment of International Space Station Crew Members Science.gov (United States) Smith, S. M.; Zwart, S. R.; Block, G.; Rice, B. I.; Davis-Street, J. F.
2005-01-01 Defining optimal nutrient requirements is imperative to ensure crew health on long-duration space exploration missions. To date, nutrient requirement data have been extremely limited because of small sample sizes and difficulties associated with collecting biological samples. In this study, we examined changes in body composition, bone metabolism, hematology, general blood chemistry, and blood levels of selected vitamins and minerals after long-duration (128-195 d) space flight aboard the International Space Station. Crew members consumed an average of 80% of the recommended energy intakes, and on landing day their body weight had decreased (P=0.051). After flight, hematocrit was less, and serum ferritin was greater than before flight (P<0.05). Superoxide dismutase was less after flight, indicating that oxidative damage had increased (P<0.05). Despite the reported use of vitamin D supplements during flight, serum 25-hydroxyvitamin D was significantly decreased after flight (P<0.01). Bone resorption was increased after flight, as indicated by several urinary markers of bone resorption. Bone formation, assessed by serum concentration of bone-specific alkaline phosphatase, was elevated only in crew members who landed in Russia, probably because of the longer time lapse between landing and sample collection. These data provide evidence that bone loss, compromised vitamin D status, and oxidative damage remain critical concerns for long-duration space flight. 5. Psychosocial Research on the International Space Station: Special Privacy Considerations Science.gov (United States) Kanas, N.; Salnitskiy, V.; Ritsher, J.; Grund, E.; Weiss, D.; Gushin, V.; Kozerenko, O. Conducting psychosocial research with astronauts and cosmonauts requires special privacy and confidentiality precautions due to the high-profile nature of the subject population and to individual crewmember perception of the risks inherent in divulging sensitive psychological information.
Sampling from this small population necessitates subject protections above and beyond standard scientific human subject protocols. Many of these protections have relevance for psychosocial research on the International Space Station. In our previous study of psychosocial issues involving crewmembers on the Mir space station, special precautions were taken during each phase of the missions. These were implemented in order to gain the trust necessary to ameliorate the perceived risks of divulging potentially sensitive psychological information and to encourage candid responses. Pre-flight, a standard confidentiality agreement was provided along with a special layman's summary indicating that only group-level data would be presented, and subjects chose their own ID codes known only to themselves. In-flight, special procedures and technologies (such as encryption) were employed to protect the data during collection. Post-flight, an analytic strategy was chosen to further mask subject identifiers, and draft manuscripts were reviewed by the astronaut office prior to publication. All five of the eligible astronauts and all eight of the eligible cosmonauts who flew joint US/Russian missions on the Mir were successfully recruited to participate, and their data completion rate was 76%. Descriptive analyses of the data indicated that there was sufficient variability in all of the measures to show that thoughtful, discriminating responses were being provided (e.g., the full range of response options was used in 63 of the 65 items of the Profile of Mood States measure, and both true and false response options were used in all 126 items of the Group Environment and the Work Environment measures). This 6. Microbiology and Crew Medical Events on the International Space Station Science.gov (United States) Oubre, Cherie; Charvat, Jacqueline M.; Kadwa, Biniafer; Taiym, Wafa; Ott, C.
Mark; Pierson, Duane; Baalen, Mary Van 2014-01-01 The closed environment of the International Space Station (ISS) creates an ideal environment for microbial growth. Previous studies have identified the ubiquitous nature of microorganisms throughout the space station environment. To ensure safety of the crew, microbial monitoring of air and surfaces within the ISS began in December 2000 and continues on a quarterly basis. Water monitoring began in 2009 when the potable water dispenser was installed on ISS. However, it is unknown if high microbial counts are associated with inflight medical events. The microbial counts are determined for the air, surface, and water samples collected during flight operations, and samples are returned to the Microbiology laboratory at the Johnson Space Center for identification. Instances of microbial counts above the established microbial limit requirements were noted and compared with inflight medical events (any non-injury event such as illness, rashes, etc.) that were reported during the same calendar quarter. Data were analyzed using repeated measures logistic regression for the forty-one US astronauts who flew on the ISS between 2000 and 2012. In that time frame, instances of microbial counts above established limits were found 10 times for air samples, 22 times for surface samples, and twice for water. Seventy-eight inflight medical events were reported among the astronauts. A three times greater risk of a medical event was found when microbial samples were high (OR = 3.01; p = .007). Engineering controls, crew training, and strict microbial limits have been established to mitigate crew medical events and environmental risks. Due to the timing issues of sampling and the samples' return to Earth, identification of a particular microorganism causing a particular inflight medical event is difficult. Further analyses are underway. 7.
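The reported association in the microbiology abstract above (OR = 3.01) came from a repeated-measures logistic regression. As a simpler illustration of what an odds ratio measures, a 2×2-table estimate with a Wald confidence interval can be computed as follows; the counts here are invented for illustration and are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Wald confidence interval
    computed on the log-odds scale.
    a: event & exposed,   b: no event & exposed,
    c: event & unexposed, d: no event & unexposed."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# Invented counts: medical events in quarters with/without high microbial samples.
or_, (lo, hi) = odds_ratio_ci(a=12, b=8, c=20, d=40)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

A repeated-measures model additionally accounts for correlation among observations from the same astronaut, which this single-table estimate ignores.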
International Space Station Medical Projects - Full Services to Mars Science.gov (United States) Pietrzyk, R. A.; Primeaux, L. L.; Wood, S. J.; Vessay, W. B.; Platts, S. H. 2018-01-01 The International Space Station Medical Projects (ISSMP) Element provides planning, integration, and implementation services for HRP research studies for both spaceflight and flight analog research. Through the implementation of these two efforts, ISSMP offers an innovative way of guiding research decisions to meet the unique challenges of understanding the human risks to space exploration. Flight services provided by ISSMP include leading informed consent briefings, developing and validating in-flight crew procedures, providing ISS crew and ground-controller training, real-time experiment monitoring, on-orbit experiment and hardware operations, and facilitating data transfer to investigators. For analog studies at the NASA Human Exploration Research Analog (HERA), the ISSMP team provides subject recruitment and screening, science requirements integration, data collection schedules, data sharing agreements, mission scenarios and facilities to support investigators. The ISSMP also serves as the HRP interface to external analog providers including the :envihab bed rest facility (Cologne, Germany), NEK isolation chamber (Moscow, Russia) and the Antarctica research stations. Working in either spaceflight or analog environments requires a coordinated effort between NASA and the investigators. The interdisciplinary nature of both flight and analog research requires investigators to be aware of concurrent research studies and take into account potential confounding factors that may impact their research objectives. Investigators must define clear research requirements, participate in Investigator Working Group meetings, obtain human use approvals, and provide study-specific training, sample and data collection, and procedures, all while adhering to schedule deadlines.
These science requirements define the technical, functional and performance operations to meet the research objectives. The ISSMP maintains an expert team of professionals with the knowledge and 8. Expanded benefits for humanity from the International Space Station Science.gov (United States) Rai, Amelia; Robinson, Julie A.; Tate-Brown, Judy; Buckley, Nicole; Zell, Martin; Tasaki, Kazuyuki; Karabadzhak, Georgy; Sorokin, Igor V.; Pignataro, Salvatore 2016-09-01 In 2012, the International Space Station (ISS) (Fig. 1) partnership published the updated International Space Station Benefits for Humanity[1], a compilation of stories about the many benefits being realized in the areas of human health, Earth observations and disaster response, and global education. This compilation has recently been revised to include updated statistics on the impacts of the benefits, and new benefits that have developed since the first publication. Two new sections have also been added to the book, economic development of space and innovative technology. This paper will summarize the updates on behalf of the ISS Program Science Forum, made up of senior science representatives across the international partnership. The new section on "Economic Development of Space" highlights case studies from public-private partnerships that are leading to a new economy in low earth orbit (LEO). Businesses provide both transportation to the ISS as well as some research facilities and services. These relationships promote a paradigm shift of government-funded, contractor-provided goods and services to commercially-provided goods purchased by government agencies. Other examples include commercial firms spending research and development dollars to conduct investigations on ISS and commercial service providers selling services directly to ISS users. This section provides examples of ISS as a test bed for new business relationships, and illustrates successful partnerships. 
The second new section, "Innovative Technology," merges technology demonstration and physical science findings that promise to return Earth benefits through continued research. Robotic refueling concepts for life extensions of costly satellites in geo-synchronous orbit have applications to robotics in industry on Earth. Flame behavior experiments reveal insight into how fuel burns in microgravity leading to the possibility of improving engine efficiency on Earth. Nanostructures and smart fluids are 9. Expanded Benefits for Humanity from the International Space Station Science.gov (United States) Rai, Amelia; Robinson, Julie A.; Tate-Brown, Judy; Buckley, Nicole; Zell, Martin; Tasaki, Kazuyuki; Karabadzhak, Georgy; Sorokin, Igor V.; Pignataro, Salvatore 2016-01-01 In 2012, the International Space Station (ISS) partnership published the updated International Space Station Benefits for Humanity, 2nd edition, a compilation of stories about the many benefits being realized in the areas of human health, Earth observations and disaster response, and global education. This compilation has recently been revised to include updated statistics on the impacts of the benefits, and new benefits that have developed since the first publication. Two new sections have also been added to the book, economic development of space and innovative technology. This paper will summarize the updates on behalf of the ISS Program Science Forum, made up of senior science representatives across the international partnership. The new section on "Economic Development of Space" highlights case studies from public-private partnerships that are leading to a new economy in low earth orbit (LEO). Businesses provide both transportation to the ISS as well as some research facilities and services. These relationships promote a paradigm shift of government-funded, contractor-provided goods and services to commercially-provided goods purchased by government agencies. 
Other examples include commercial firms spending research and development dollars to conduct investigations on ISS and commercial service providers selling services directly to ISS users. This section provides examples of ISS as a test bed for new business relationships, and illustrates successful partnerships. The second new section, Innovative Technology, merges technology demonstration and physical science findings that promise to return Earth benefits through continued research. Robotic refueling concepts for life extensions of costly satellites in geo-synchronous orbit have applications to robotics in industry on Earth. Flame behavior experiments reveal insight into how fuel burns in microgravity leading to the possibility of improving engine efficiency on Earth. Nanostructures and smart fluids are 10. Space station needs, attributes and architectural options study. Briefing material, mid-term review Science.gov (United States) 1982-01-01 User mission requirements and their relationship to the current space transportation system are examined as a means of assuring the infusion of corporate ideas and knowledge in the space station program. Specific tasks include developing strategies to develop user consistency; determine DOD implication and requirements; and foster industry involvement in the space station. Mission alternatives; accrued benefits; program options; system attributes and characteristics; and a recommended plan for space station evolution are covered. 11. Protein crystal growth and the International Space Station Science.gov (United States) DeLucas, L. J.; Moore, K. M.; Long, M. M. 1999-01-01 Protein structural information plays a key role in understanding biological structure-function relationships and in the development of new pharmaceuticals for both chronic and infectious diseases. 
The Center for Macromolecular Crystallography (CMC) has devoted considerable effort to studying the fundamental processes involved in macromolecular crystal growth in both 1-g and microgravity environments. Results from experiments performed on more than 35 U.S. space shuttle flights have clearly indicated that microgravity can provide a beneficial environment for macromolecular crystal growth. This research has led to the development of a new generation of pharmaceuticals that are currently in preclinical or clinical trials for diseases such as cutaneous T-cell lymphoma, psoriasis, rheumatoid arthritis, AIDS, influenza, stroke and other cardiovascular complications. The International Space Station (ISS) provides an opportunity to have complete crystallographic capability on orbit, which was previously not possible with the space shuttle orbiter. As envisioned, the x-ray Crystallography Facility (XCF) will be a complete facility for growing protein crystals; selecting, harvesting, and mounting sample crystals for x-ray diffraction; cryo-freezing mounted crystals if necessary; performing x-ray diffraction studies; and downlinking the data for use by crystallographers on the ground. Other advantages of such a facility include crystal characterization so that iterations in the crystal growth conditions can be made, thereby optimizing the final crystals produced in a three-month interval on the ISS. 12. Sampling Indoor Aerosols on the International Space Station Science.gov (United States) Meyer, Marit E. 2016-01-01 In a spacecraft cabin environment, the size range of indoor aerosols is much larger and they persist longer than on Earth because they are not removed by gravitational settling. A previous aerosol experiment in 1991 documented that over 90% of the mass concentration of particles in the NASA Space Shuttle air was between 10 μm and 100 μm, based on measurements with a multi-stage virtual impactor and a nephelometer (Liu et al. 1991). 
While the now-retired Space Shuttle had short duration missions (less than two weeks), the International Space Station (ISS) has been continually inhabited by astronauts for over a decade. High concentrations of inhalable particles on ISS are potentially responsible for crew complaints of respiratory and eye irritation and comments about 'dusty' air. Air filtration is the current control strategy for airborne particles on the ISS, and filtration modeling, performed for engineering and design validation of the air revitalization system in ISS, predicted that PM requirements would be met. However, aerosol monitoring has never been performed on the ISS to verify PM levels. A flight experiment is in preparation which will provide data on particulate matter in ISS ambient air. Particles will be collected with a thermophoretic sampler as well as with passive samplers which will extend the particle size range of sampling. Samples will be returned to Earth for chemical and microscopic analyses, providing the first aerosol data for ISS ambient air. 13. Detecting accelerometric nonlinearities in the international space station Science.gov (United States) Sáez, N.; Gavaldà, Jna.; Ruiz, X.; Shevtsova, V. 2014-10-01 The present work aims to study mechanical nonlinearities detected in the accelerometric records during a thermodiffusion experiment performed at the International Space Station, ISS. In that experiment the test cell was subjected to harmonic vibrations of different frequencies and amplitudes. Accelerometric data associated to the runs were downloaded from NASA PIMS website. Second order spectral analysis shows that the shaker modifies the normality of the data and introduces nonlinearities in the distribution of energy. High Order Spectral Analysis, HOSA, based on the bispectrum, bicoherence, trispectrum and tricoherence functions enabled us to study the kind of these nonlinearities. 
Additionally, a new method using biphase and triphase histograms helps assess whether quadratic and/or cubic phase-coupling mechanisms are responsible for the anomalous nonlinear energy transfer detected. Finally, the RMS acceleration values are investigated to check whether the vibratory limit requirements of the ISS are exceeded. This methodology is important not only in generic aerospace engineering research but also in the space sciences, helping researchers characterize their experiments more globally. It is also worth noting that HOSA techniques are not new, but they have never before been applied to accelerometric data from the ISS. 14. Autonomous Payload Operations Onboard the International Space Station Science.gov (United States) Stetson, Howard K.; Deitsch, David K.; Cruzen, Craig A.; Haddock, Angie T. 2007-01-01 Operating the International Space Station (ISS) involves many complex crew-tended, ground-operated, and combined systems. Over the life of the ISS program, it has become evident that automated and autonomous systems on board can accomplish more while reducing the workload of the crew and ground operators. Engineers at the National Aeronautics and Space Administration's (NASA) Marshall Space Flight Center in Huntsville, Alabama, working in collaboration with The Charles Stark Draper Laboratory, have developed an autonomous software system that uses the Timeliner User Interface Language and expert logic to continuously monitor ISS payload systems, issue commands, and signal ground operators as required. This paper describes the development history of the system, its concept of operation, and its components. The paper also discusses the testing process as well as the facilities used to develop the system. The paper concludes with a description of future enhancement plans for use on the ISS as well as potential applications to Lunar and Mars exploration systems. 15. 
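The bispectrum underlying the HOSA accelerometry analysis above detects quadratic phase coupling: a peak appears at (f1, f2) only when components at f1, f2, and f1+f2 are phase-locked. The following is a generic segment-averaged estimator sketch, not the authors' code; it assumes bin-aligned tones and a rectangular window:

```python
import numpy as np

def bispectrum(x, nfft=64):
    """Segment-averaged bispectrum B(f1, f2) = E[X(f1) X(f2) X*(f1+f2)].

    Phase-coupled frequency triads add coherently across segments,
    while components with independent phases average toward zero.
    Assumes bin-aligned tones (no tapering window applied).
    """
    segs = [x[i:i + nfft] for i in range(0, len(x) - nfft + 1, nfft)]
    B = np.zeros((nfft // 2, nfft // 2), dtype=complex)
    for s in segs:
        X = np.fft.fft(s)
        for f1 in range(nfft // 2):
            for f2 in range(nfft // 2 - f1):  # keep f1 + f2 below Nyquist
                B[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
    return B / len(segs)
```

For example, segments containing cos(ω5·t + p1) + cos(ω9·t + p2) + cos(ω14·t + p1 + p2), with p1 and p2 drawn freshly per segment, produce a dominant |B| peak at bins (5, 9) because the bin-14 component is phase-coupled to the other two.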
Space-Hotel EARLY BIRD - A Visionary Prospect of a Space Station Science.gov (United States) Amekrane, R.; Holze, C. 2002-01-01 The International Space Station was planned for research purposes. In 2001 the first private man, Dennis Tito, visited the ISS, and the second private man, Mark Shuttleworth, is following him. Space pioneers such as Wernher von Braun and Sir Arthur C. Clarke had the dream that one day a space station in earth orbit would host tourists. It is evident that the ISS is not designed to host tourists, so this dream is still open. Asking the question "what should a space station look like to host tourists?", the German Aerospace Society DGLR e.V. initiated a contest in April 2001 under the patronage of Mr. Joerg Feustel-Buechl, the Director of Manned Spaceflight and Microgravity, European Space Agency (ESA). Because the definition and design of living space is the domain of architecture, the approach was to gather new ideas from young architects in cooperation with space experts. This contest was directed at students of architecture, and the task set was to design a hotel for earth orbit accommodating 220 guests. The contest was named "Early Bird - Visions of a Space Hotel". The results and models of the students' work were shown in an exhibition in Hamburg/Germany, which was open to the public from September 19th till October 20th, 2001. During the summer term of 2001 seventeen designs were completed. Having volunteer specialists from the space field in charge ensured that the designs reflected a realistic possibility of being realized. Within this interdisciplinary project both parties learned from each other. The 17 different designs were focused on the expectations and needs of a future space tourist. 
The designs are certainly not feasible today, but they are realistic in the sense that they could be built in the future. This paper will present an overview of the 17 designs as visions of a future space hotel. The designs used 16. Utilization of common pressurized modules on the Space Station Freedom Science.gov (United States) Mazanek, Daniel D.; Heck, Michael L.; Gould, Marston J. 1991-01-01 During the preliminary design review of Space Station Freedom elements and subsystems, it was shown that reductions of cost, weight, and on-orbit integration and verification would be necessary in order to meet program constraints, particularly nominal Orbiter payload launch capability. At that time, the Baseline station consisted of four resource nodes and two 44 ft modules. In this study, the viability of a common module which maintains crew and payload accommodation is assessed. The size, transportation, and orientation of modules and the accommodation of system racks and user experiments are considered and compared to baseline. Based on available weight estimates, a module pattern consisting of six 28 ft common elements with three radial and two end ports is shown to be nearly optimal. Advantageous characteristics include a reduction in assembly flights, dual egress from all elements, logical functional allocation, no adverse impacts to international partners, favorable airlock, cupola, ACRV (Assured Crew Return Vehicle), and logistics module accommodation, and desirable flight attitude and control characteristics. 17. International Systems Integration on the International Space Station Science.gov (United States) Gerstenmaier, William H.; Ticker, Ronald L. 2007-01-01 Over the next few months, the International Space Station (ISS), and human spaceflight in general, will undergo momentous change. The European Columbus and Japanese Kibo Laboratories will be added to the station, joining U.S. and Russian elements already on orbit. 
Columbus, Jules Verne Automated Transfer Vehicle (ATV), and Kibo Control Centers will soon be joining control centers in the US and Russia in coordinating ISS operations and research. The Canadian Special Purpose Dexterous Manipulator (SPDM) will be performing extravehicular activities that previously only astronauts on EVA could do, but remotely and with increased safety. This paper will address the integration of these international elements and operations into the ISS, both from hardware and human perspectives. Interoperability of on-orbit systems and ground control centers and their human operators from Europe, Japan, Canada, Russia and the U.S. poses significant and unique challenges. Coordination of logistical support and transportation of crews and cargo is also a major challenge. As we venture out into the cosmos and inhabit the Moon and other planets, it's the systems and operational experience and partnership development on ISS, humanity's orbiting outpost, that is making these journeys possible. 18. Floating Potential Probe Deployed on the International Space Station Science.gov (United States) Ferguson, Dale C. 2001-01-01 In the spring and summer of 2000, at the request of the International Space Station (ISS) Program Office, a Plasma Contactor Unit Tiger Team was set up to investigate the threat of the ISS arcing in the event of a plasma contactor outage. Modeling and ground tests done under that effort showed that it is possible for the external structure of the ISS to become electrically charged to as much as -160 V under some conditions. Much of this work was done in anticipation of the deployment of the first large ISS solar array in November 2000. 
It was recognized that, with this deployment, the power system would be energized to its full voltage and that the predicted charging would pose an immediate threat to crewmembers involved in extravehicular activities (EVA's), as well as long-term damage to the station structure, were the ISS plasma contactors to be turned off or stop functioning. The Floating Potential Probe was conceived, designed, built, and deployed in record time by a crack team of scientists and engineers led by the NASA Glenn Research Center in response to ISS concerns about crew safety. 19. Discharge ignition behavior of the Space Station plasma contactor Science.gov (United States) Sarver-Verhey, Timothy R.; Hamley, John A. 1995-01-01 Ignition testing of hollow cathode assemblies being developed for the Space Station plasma contactor system has been initiated to validate reliable multiple restart capability. An ignition approach was implemented that was derived from an earlier arcjet program that successfully demonstrated over 11,600 ignitions. For this, a test profile was developed to allow accelerated cyclic testing at expected operating conditions. To date, one hollow cathode assembly has been used to demonstrate multiple ignitions. A prototype hollow cathode assembly has achieved 3,615 successful ignitions at a nominal anode voltage of 18.0 V. During the ignition testing several parameters were investigated, of which the heater power and pre-heat time were the only parameters found to significantly impact ignition rate. 20. Light Microscopy Module: International Space Station Premier Automated Microscope Science.gov (United States) Sicker, Ronald J.; Foster, William M.; Motil, Brian J.; Meyer, William V.; Chiaramonte, Francis P.; Abbott-Hearn, Amber; Atherton, Arthur; Beltram, Alexander; Bodzioney, Christopher; Brinkman, John; 2016-01-01 The Light Microscopy Module (LMM) was launched to the International Space Station (ISS) in 2009 and began hardware operations in 2010. 
It continues to support Physical and Biological scientific research on ISS. During 2016, if all goes as planned, three experiments will be completed: [1] Advanced Colloids Experiments with Heated base-2 (ACE-H2) and [2] Advanced Colloids Experiments with Temperature control (ACE-T1). Preliminary results, along with an overview of present and future LMM capabilities will be presented; this includes details on the planned data imaging processing and storage system, along with the confocal upgrade to the core microscope. [1] a consortium of universities from the State of Kentucky working through the Experimental Program to Stimulate Competitive Research (EPSCoR): Stuart Williams, Gerold Willing, Hemali Rathnayake, et al. and [2] from Chungnam National University, Daejeon, S. Korea: Chang-Soo Lee, et al. 1. Applications of the International Space Station Probabilistic Risk Assessment Model Science.gov (United States) Grant, Warren; Lutomski, Michael G. 2011-01-01 Recently the International Space Station (ISS) has incorporated more Probabilistic Risk Assessments (PRAs) in the decision making process for significant issues. Future PRAs will have major impact to ISS and future spacecraft development and operations. These PRAs will have their foundation in the current complete ISS PRA model and the current PRA trade studies that are being analyzed as requested by ISS Program stakeholders. ISS PRAs have recently helped in the decision making process for determining reliability requirements for future NASA spacecraft and commercial spacecraft, making crew rescue decisions, as well as making operational requirements for ISS orbital orientation, planning Extravehicular activities (EVAs) and robotic operations. This paper will describe some applications of the ISS PRA model and how they impacted the final decision. This paper will discuss future analysis topics such as life extension, requirements of new commercial vehicles visiting ISS. 2. 
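PRA models such as the ISS model described above ultimately combine component failure probabilities through fault-tree gates. The following toy sketch shows only that basic gate arithmetic — it is a generic illustration with invented probabilities, not the actual ISS PRA model:

```python
# Fault-tree gate arithmetic -- a toy sketch, NOT the ISS PRA model.
# All failure probabilities below are invented for illustration and
# assume independent component failures.

def any_fails(ps):
    """OR gate: a series system fails if any one component fails."""
    survive = 1.0
    for p in ps:
        survive *= (1.0 - p)
    return 1.0 - survive

def all_fail(ps):
    """AND gate: a redundant set fails only if every unit fails."""
    prob = 1.0
    for p in ps:
        prob *= p
    return prob

# Two redundant pumps (0.02 each) in series with one controller (0.001):
loss_of_cooling = any_fails([all_fail([0.02, 0.02]), 0.001])
```

Real PRA tools layer uncertainty distributions, common-cause failures, and event-sequence logic on top of this arithmetic, but the gates are the same.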
Viscosity Measurement via Drop Coalescence: A Space Station Experiment Science.gov (United States) Antar, Basil; Ethridge, Edwin C. 2010-01-01 The concept of using low gravity experimental data together with CFD simulations for measuring the viscosity of highly viscous liquids was recently validated onboard the International Space Station (ISS). A series of microgravity tests were conducted for this purpose on the ISS in July 2004 and in May 2005. In these experiments two liquid drops were brought manually together until they touched and were allowed to coalesce under the action of the capillary force alone. The coalescence process was recorded photographically, from which the contact radius speed of the merging drops was measured. The liquid viscosity was determined by fitting the measured data with accurate numerical simulation of the coalescence process. Several liquids were tested, and for each liquid several drop diameters were employed. Experimental and numerical results will be presented in which the viscosity of several highly viscous liquids was determined using this technique. 3. International Space Station Noise Constraints Flight Rule Process Science.gov (United States) Limardo, Jose G.; Allen, Christopher S.; Danielson, Richard W. 2014-01-01 Crewmembers onboard the International Space Station (ISS) live in a unique workplace environment for as long as 6-12 months. During these long-duration ISS missions, noise exposures from onboard equipment pose concerns for human factors and crewmember health risks, such as possible reductions in hearing sensitivity, disruptions of crew sleep, interference with speech intelligibility and voice communications, interference with crew task performance, and reduced alarm audibility. 
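The coalescence experiment above infers viscosity by fitting measured contact-radius data to full numerical simulation. A much cruder back-of-envelope — my sketch, not the authors' method — uses the viscous coalescence scaling R(t) ~ σt/μ, where σ is surface tension and μ viscosity, and reads an order-of-magnitude μ from the slope of radius versus time (the order-one prefactor and logarithmic corrections are ignored):

```python
# Order-of-magnitude viscosity from the viscous coalescence scaling
# R(t) ~ sigma * t / mu.  This ignores the order-one prefactor and is
# NOT the paper's simulation-fitting method -- just an illustration.

def viscosity_from_coalescence(times, radii, sigma):
    """Least-squares slope of R(t) through the origin; mu ~ sigma / slope.

    times in s, radii in m, sigma in N/m; returns Pa*s (up to an O(1) factor).
    """
    num = sum(t * r for t, r in zip(times, radii))
    den = sum(t * t for t in times)
    slope = num / den          # contact-radius speed, m/s
    return sigma / slope       # Pa*s

# Synthetic check: sigma = 0.02 N/m, mu = 10 Pa*s  ->  slope = 0.002 m/s
times = [0.0, 0.5, 1.0, 1.5, 2.0]
radii = [0.002 * t for t in times]
mu_est = viscosity_from_coalescence(times, radii, 0.02)
```

The through-origin fit reflects the fact that the contact radius starts from zero at the moment the drops touch.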
The purpose of this poster is to describe how a recently-updated noise constraints flight rule is being used to implement a NASA-created Noise Exposure Estimation Tool and Noise Hazard Inventory to predict crew noise exposures and recommend when hearing protection devices are needed. 4. Octafluoropropane Concentration Dynamics on Board the International Space Station Science.gov (United States) Perry, J. L. 2003-01-01 Since activating the International Space Station's (ISS) Service Module in November 2000, archival air quality samples have shown highly variable concentrations of octafluoropropane in the cabin. This variability has been directly linked to leakage from air conditioning systems on board the Service Module, Zvezda. While octafluoropropane is not highly toxic, it presents a significant challenge to the trace contaminant control systems. A discussion of octafluoropropane concentration dynamics is presented, and the ability of on-board trace contaminant control systems to effectively remove octafluoropropane from the cabin atmosphere is assessed. Consideration is given to operational and logistics issues that may arise from octafluoropropane and other halocarbon challenges to the contamination control systems, as well as the potential for affecting cabin air quality. 5. Space Station assembly sequence planning - An engineering and operational challenge Science.gov (United States) Kaidy, James T.; Bastedo, William G. 1988-01-01 This paper discusses the Space Station assembly sequence planning and development process. It presents the planning methodologies from both historical and current perspectives. It is shown that planning the assembly sequence is a new and unique challenge, and its solution requires the simultaneous satisfaction of many diverse variables and constraints. 
The considerations which influence the development of the assembly sequence include launch vehicle integration and lift capabilities, on-orbit assembly flight operations, vehicle flight dynamics, spacecraft system capabilities, and resource availability. Many of these considerations are described in this paper. In addition, the examples presented demonstrate the current process for assembly sequence planning and show many of the complex trade-offs that must be performed. 6. Space Station CMIF extended duration metabolic control test Science.gov (United States) Schunk, Richard G.; Bagdigian, Robert M.; Carrasquillo, Robyn L.; Ogle, Kathryn Y.; Wieland, Paul O. 1989-01-01 The Space Station Extended Duration Metabolic Control Test (EMCT) was conducted at the MSFC Core Module Integration Facility. The primary objective of the EMCT was to gather performance data from a partially-closed regenerative Environmental Control and Life Support (ECLS) system functioning under steady-state conditions. Included is a description of the EMCT configuration, a summary of events, a discussion of anomalies that occurred during the test, and detailed results and analysis from individual measurements of water and gas samples taken during the test. A comparison of the physical, chemical, and microbiological methods used in the post-test laboratory analyses of the water samples is included. The preprototype ECLS hardware used in the test is also described, providing an overall process description and theory of operation for each hardware item. Analytical results pertaining to a system-level mass balance and selected system power estimates are also included. 7. X-38 research aircraft launch from Space Station - computer animation Science.gov (United States) 1997-01-01 In the mid-1990's researchers at the NASA Dryden Flight Research Center, Edwards, California, and Johnson Space Center in Houston, Texas, began working actively with the sub-scale X-38 prototype crew return vehicle (CRV). 
This was an unpiloted lifting body designed at 80 percent of the size of a projected emergency crew return vehicle for the International Space Station. The X-38 and the actual CRV are patterned after a lifting-body shape first employed in the Air Force X-23 (SV-5) program in the mid-1960's and the Air Force-NASA X-24A lifting-body project in the early to mid-1970's. Built by Scaled Composites, Inc., in Mojave, CA, and outfitted with avionics, computer systems, and other hardware at Johnson Space Center, two X-38 aircraft were involved in flight research at Dryden beginning in July of 1997. Before that, however, Dryden conducted some 13 flights at a drop zone near California City, California. These tests were done with a 1/6-scale model of the X-38 aircraft to test the parafoil concept that would be employed on the X-38 and the actual CRV. The basic concept is that the actual CRV will use an inertial navigation system together with the Global Positioning System of satellites to guide it from the International Space Station into the earth's atmosphere. A deorbit engine module will redirect the vehicle from orbit into the atmosphere where a series of parachutes and a parafoil will deploy in sequence to bring the vehicle to a landing, possibly in a field next to a hospital. Flight research at NASA Dryden for the X-38 began with an unpiloted captive carry flight in which the vehicle remained attached to its future launch vehicle, the Dryden B-52 008. There were four captive flights in 1997 and three in 1998, plus the first drop test on March 12, 1998, using the parachutes and parafoil. Further captive and drop tests occurred in 1999. Although the X-38 landed safely on the lakebed at Edwards after the March 1998 drop test, there had been some problems 8. Functional decor in the International Space Station: Body orientation cues and picture perception Science.gov (United States) Coss, Richard G.; Clearwater, Yvonne A.; Barbour, Christopher G.; Towers, Steven R. 
1989-01-01 Subjective reports of American astronauts and their Soviet counterparts suggest that homogeneous, often symmetrical, spacecraft interiors can contribute to motion sickness during the earliest phase of a mission and can also engender boredom. Two studies investigated the functional aspects of Space Station interior aesthetics. One experiment examined differential color brightnesses as body orientation cues; the other involved a large survey of photographs and paintings that might enhance the interior aesthetics of the proposed International Space Station. Ninety male and female college students reclining on their backs in the dark were disoriented by a rotating platform and inserted under a slowly rotating disk that filled their entire visual field. The entire disk was painted the same color but one half had a brightness value that was about 69 percent higher than the other. The effects of red, blue, and yellow were examined. Subjects wearing frosted goggles opened their eyes to view the rotating, illuminated disk, which was stopped when they felt that they were right-side up. For all three colors, significant numbers of subjects said they felt right-side up when the brighter side of the disk filled their upper visual field. These results suggest that color brightness could provide Space Station crew members with body orientation cues as they move about. It was found that subjects preferred photographs and paintings with the greatest depths of field, irrespective of picture topic. 9. Space Acceleration Measurement System-II: Microgravity Instrumentation for the International Space Station Research Community Science.gov (United States) Sutliff, Thomas J. 1999-01-01 The International Space Station opens for business in the year 2000, and with the opening, science investigations will take advantage of the unique conditions it provides as an on-orbit laboratory for research. 
With initiation of scientific studies comes a need to understand the environment present during research. The Space Acceleration Measurement System-II provides researchers a consistent means to understand the vibratory conditions present during experimentation on the International Space Station. The Space Acceleration Measurement System-II, or SAMS-II, detects vibrations present while the space station is operating. SAMS-II on-orbit hardware is comprised of two basic building block elements: a centralized control unit and multiple Remote Triaxial Sensors deployed to measure the acceleration environment at the point of scientific research, generally within a research rack. Ground Operations Equipment is deployed to complete the command, control and data telemetry elements of the SAMS-II implementation. Initially, operations consist of user requirements development, measurement sensor deployment and use, and data recovery on the ground. Future system enhancements will provide additional user functionality and support more simultaneous users. 10. CO2 on the International Space Station: An Operations Update Science.gov (United States) Law, Jennifer; Alexander, David 2016-01-01 PROBLEM STATEMENT: We describe CO2 symptoms that have been reported recently by crewmembers on the International Space Station and our continuing efforts to control CO2 to lower levels than historically accepted. BACKGROUND: Throughout the International Space Station (ISS) program, anecdotal reports have suggested that crewmembers develop CO2-related symptoms at lower CO2 levels than would be expected terrestrially. Since 2010, operational limits have controlled the 24-hour average CO2 to 4.0 mm Hg, or below as driven by crew symptomatology. In recent years, largely due to increasing awareness by crew and ground team, there have been increased reports of crew symptoms. The aim of this presentation is to discuss recent observations and operational impacts to lower CO2 levels on the ISS. 
CASE PRESENTATION: Crewmembers are routinely asked about CO2 symptoms in their weekly private medical conferences with their crew surgeons. In recent ISS expeditions, crewmembers have noted symptoms attributable to CO2 starting at 2.3 mm Hg. Between 2.3 - 2.7 mm Hg, fatigue and full-headedness have been reported. Between 2.7 - 3.0 mm Hg, there have been self-reports of missed procedure steps or procedures taking longer than expected. Above 3.0 - 3.4 mm Hg, headaches have been reported. A wide range of inter- and intra-individual variability in sensitivity to CO2 has been noted. OPERATIONAL / CLINICAL RELEVANCE: These preliminary data provide semi-quantitative ranges that have been used to inform a new operational limit of 3.0 mm Hg as a compromise between systems capabilities and the recognition that there are human health and performance impacts at recent ISS CO2 levels. Current evidence would suggest that an operational limit between 0.5 and 2.0 mm Hg may maintain health and performance. Future work is needed to establish long-term ISS and future vehicle operational limits. 11. Benefits of International Collaboration on the International Space Station Science.gov (United States) Hasbrook, Pete; Robinson, Julie A.; Brown Tate, Judy; Thumm, Tracy; Cohen, Luchino; Marcil, Isabelle; De Parolis, Lina; Hatton, Jason; Umezawa, Kazuo; Shirakawa, Masaki; 2017-01-01 The International Space Station is a valuable platform for research in space, but the benefits are limited if research is only conducted by individual countries. Through the efforts of the ISS Program Science Forum, international science working groups, and interagency cooperation, international collaboration on the ISS has expanded as ISS utilization has matured. Members of science teams benefit from working with counterparts in other countries. Scientists and institutions bring years of experience and specialized expertise to collaborative investigations, leading to new perspectives and approaches to scientific challenges.
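As a purely illustrative aside (not part of any abstract in this listing), the semi-quantitative CO2 symptom ranges reported in entry 10 above can be expressed as a small lookup. The band edges come from the case presentation; the function name and the wording of the returned labels are hypothetical.

```python
# Illustrative sketch only: maps a 24-hour average ppCO2 reading (mm Hg) to the
# symptom band anecdotally reported by ISS crewmembers (entry 10 above).
# Band edges are taken from the abstract; symptoms reportedly start at 2.3 mm Hg.
def iss_co2_symptom_band(ppco2_mmhg: float) -> str:
    if ppco2_mmhg < 2.3:
        return "no symptoms reported at this level"
    if ppco2_mmhg < 2.7:
        return "fatigue and full-headedness"
    if ppco2_mmhg < 3.0:
        return "missed procedure steps / procedures taking longer"
    return "headaches"

# Comparing readings against the bands (the newer operational limit is 3.0 mm Hg):
for level in (2.0, 2.5, 2.8, 3.2):
    print(level, "->", iss_co2_symptom_band(level))
```

Note the wide inter- and intra-individual variability reported in the abstract: these bands summarize group-level anecdotes, not individual thresholds.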
Combining new ideas and historical results brings synergy and improved peer-reviewed scientific methods and results. World-class research facilities can be expensive and logistically complicated, jeopardizing their full utilization. Experiments that would be prohibitively expensive for a single country can be achieved through contributions of resources from two or more countries, such as crew time, up- and downmass, and experiment hardware. Cooperation also avoids duplication of experiments and hardware among agencies. Biomedical experiments can be completed earlier if astronauts or cosmonauts from multiple agencies participate. Countries responding to natural disasters benefit from ISS imagery assets, even if the country has no space agency of its own. Students around the world participate in ISS educational opportunities, and work with students in other countries, through open curriculum packages and through international competitions. Even experiments conducted by a single country can benefit scientists around the world, through specimen sharing programs and publicly accessible "open data" repositories. For ISS data, these repositories include GeneLab and the Physical Science Informatics System. Scientists can conduct new research using ISS data without having to launch and execute their own experiments. Multilateral collections of research 12. Science.gov (United States) Love, John; Cooley, Vic 2016-01-01 The emerging field of Translational Research aims to coalesce interdisciplinary findings from basic science for biomedical applications. To complement spaceflight research using human subjects, translational studies can be designed to address aspects of space-related human health risks and help develop countermeasures to prevent or mitigate them, with therapeutic benefits for analogous conditions experienced on Earth.
Translational research with cells and model organisms is being conducted onboard the International Space Station (ISS) in connection with various human systems impacted by spaceflight, such as the cardiovascular, musculoskeletal, and immune systems. Examples of recent cell-based translational investigations on the ISS include the following. The JAXA investigation Cell Mechanosensing seeks to identify gravity sensors in skeletal muscle cells to develop muscle atrophy countermeasures by analyzing tension fluctuations in the plasma membrane, which changes the expression of key proteins and genes. Earth applications of this study include therapeutic approaches for some forms of muscular dystrophy, which appear to parallel aspects of muscle wasting in space. Spheroids is an ESA investigation examining the system of endothelial cells lining the inner surface of all blood vessels in terms of vessel formation, cellular proliferation, and programmed cell death, because injury to the endothelium has been implicated as underpinning various cardiovascular and musculoskeletal problems arising during spaceflight. Since endothelial cells are involved in the functional integrity of the vascular wall, this research has applications to Earth diseases such as atherosclerosis, diabetes, and hypertension. The goal of the T-Cell Activation in Aging NASA investigation is to understand human immune system depression in microgravity by identifying gene expression patterns of candidate molecular regulators, which will provide further insight into factors that may play a 13. Engineering a Live UHD Program from the International Space Station Science.gov (United States) Grubbs, Rodney; George, Sandy 2017-01-01 The first-ever live downlink of Ultra-High Definition (UHD) video from the International Space Station (ISS) was the highlight of a “Super Session” at the National Association of Broadcasters (NAB) Show in April 2017. 
Ultra-High Definition is four times the resolution of “full HD” or “1080P” video. Also referred to as “4K”, the Ultra-High Definition video downlink from the ISS all the way to the Las Vegas Convention Center required considerable planning, pushed the limits of conventional video distribution from a spacecraft, and was the first use of High Efficiency Video Coding (HEVC) from a spacecraft. The live event at NAB will serve as a pathfinder for more routine downlinks of UHD as well as use of HEVC for conventional HD downlinks to save bandwidth. A similar demonstration was conducted in 2006 with the Discovery Channel to demonstrate the ability to stream HDTV from the ISS. This paper will describe the overall workflow and routing of the UHD video, how audio was synchronized even though the video and audio were received many seconds apart from each other, and how the demonstration paves the way for not only more efficient video distribution from the ISS, but also serves as a pathfinder for more complex video distribution from deep space. The paper will also describe how a “live” event was staged when the UHD video coming from the ISS had a latency of 10+ seconds. In addition, the paper will touch on the unique collaboration between the inherently governmental aspects of the ISS, commercial partners Amazon and Elemental, and the National Association of Broadcasters. 14. Gravitational Biology Facility on Space Station: Meeting the needs of space biology Science.gov (United States) 1992-01-01 The Gravitational Biology Facility (GBF) is a set of generic laboratory equipment needed to conduct research on Space Station Freedom (SSF), focusing on Space Biology Program science (Cell and Developmental Biology and Plant Biology). The GBF will be functional from the earliest utilization flights through the permanent manned phase.
Gravitational biology research will also make use of other Life Sciences equipment on the space station as well as existing equipment developed for the space shuttle. The facility equipment will be developed based on requirements derived from experiments proposed by the scientific community to address critical questions in the Space Biology Program. This requires that the facility have the ability to house a wide variety of species, various methods of observation, and numerous methods of sample collection, preservation, and storage. The selection of the equipment will be done by the members of a scientific working group (5 members representing cell biology, 6 developmental biology, and 6 plant biology) who also provide requirements to design engineers to ensure that the equipment will meet scientific needs. All equipment will undergo extensive ground-based experimental validation studies by various investigators addressing a variety of experimental questions. Equipment will be designed to be adaptable to other space platforms. The theme of the Gravitational Biology Facility effort is to provide optimal and reliable equipment to answer the critical questions in Space Biology as to the effects of gravity on living systems. 15. Astrobee: Developing a Free Flying Robot for the International Space Station Science.gov (United States) Bualat, Maria; Barlow, Jonathan; Fong, Terrence; Provencher, Christopher; Smith, Trey; Zuniga, Allison 2015-01-01 Astronaut time will always be in short supply, consumables (e.g., oxygen) will always be limited, and some work will not be feasible, or productive, for astronauts to do manually. Free flyers offer significant potential to perform a great variety of tasks, including routine, repetitive or simple but long-duration work, such as conducting environment surveys, taking sensor readings or monitoring crew activities.
The "Astrobee" project is developing a new free flying robot system suitable for performing Intravehicular Activity (IVA) work on the International Space Station (ISS). This paper will describe the Astrobee project objectives, initial design, concept of operations, and key challenges. 16. National Geodetic Survey (NGS) Geodetic Control Stations, (Horizontal and/or Vertical Control), March 2009 Data.gov (United States) Earth Data Analysis Center, University of New Mexico — This data contains a set of geodetic control stations maintained by the National Geodetic Survey. Each geodetic control station in this dataset has either a precise... 17. On developing the local research environment of the 1990s - The Space Station era Science.gov (United States) Chase, Robert; Ziel, Fred 1989-01-01 A requirements analysis for the Space Station's polar platform data system has been performed. Based upon this analysis, a cluster, layered cluster, and layered-modular implementation of one specific module within the Eos Data and Information System (EosDIS), an active database for satellite remote sensing research, has been developed. It is found that a distributed system based on a layered-modular architecture and employing current generation workstation technologies has the requisite attributes ascribed by the remote sensing research community. However, benchmark testing, probabilistic analysis, failure analysis, and user-survey technique analysis found that this architecture presents some operational shortcomings that will not be alleviated by new hardware or software developments. Consequently, the potential of a fully-modular layered architectural design for meeting the needs of Eos researchers has also been evaluated, concluding that it would be well suited to the evolving requirements of this multidisciplinary research community. 18.
Functional Testing of the Space Station Plasma Contactor Science.gov (United States) Patterson, Michael J.; Hamley, John A.; Sarver-Verhey, Timothy R.; Soulas, George C. 1995-01-01 A plasma contactor system has been baselined for the International Space Station Alpha (ISSA) to control the electrical potentials of surfaces to eliminate/mitigate damaging interactions with the space environment. The system represents a dual-use technology which is a direct outgrowth of the NASA electric propulsion program and, in particular, the technology development effort on ion thruster systems. The plasma contactor subsystems include a hollow cathode assembly, a power electronics unit, and an expellant management unit. Under a pre-flight development program these subsystems are being developed to the level of maturity appropriate for transfer to U.S. industry for final development. Development efforts for the hollow cathode assembly include design selection and refinement, validating its required lifetime, and quantifying the cathode performance and interface specifications. To date, cathode components have demonstrated over 10,000 hours lifetime, and a hollow cathode assembly has demonstrated over 3,000 ignitions. Additionally, preliminary integration testing of a hollow cathode assembly with a breadboard power electronics unit has been completed. This paper discusses test results and the development status of the plasma contactor subsystems for ISSA, and in particular, the hollow cathode assembly. 19. Growth Chambers on the International Space Station for Large Plants Science.gov (United States) Massa, Gioia D.; Wheeler, Raymond M.; Morrow, Robert C.; Levine, Howard G. 2016-01-01 The International Space Station (ISS) now has platforms for conducting research on horticultural plant species under LED (Light Emitting Diodes) lighting, and those capabilities continue to expand. 
The Veggie vegetable production system was deployed to the ISS as an applied research platform for food production in space. Veggie is capable of growing a wide array of horticultural crops. It was designed for low power usage, low launch mass and stowage volume, and minimal crew time requirements. The Veggie flight hardware consists of a light cap containing red (630 nanometers), blue (455 nanometers), and green (530 nanometers) LEDs. Interfacing with the light cap is an extendable bellows/baseplate for enclosing the plant canopy. A second large plant growth chamber, the Advanced Plant Habitat (APH), will fly to the ISS in 2017. APH will be a fully controllable environment for high-quality plant physiological research. APH will control light (quality, level, and timing), temperature, CO2, relative humidity, and irrigation, while scrubbing any cabin or plant-derived ethylene and other volatile organic compounds. Additional capabilities include sensing of leaf temperature and root zone moisture, root zone temperature, and oxygen concentration. The light cap will have red (630 nm), blue (450 nm), green (525 nm), far red (730 nm) and broad spectrum white LEDs (4100K). There will be several internal cameras (visible and IR) to monitor and record plant growth and operations. Veggie and APH are available for research proposals. 20. Behavioral Adaptations of Female Mice on the International Space Station Science.gov (United States) Strieter, I.; Moyer, E. L.; Lowe, M.; Choi, S.; Gong, C.; Cadena, Sam; Stodieck, Louis; Globus, R. K.; Ronca, A. E. 2017-01-01 Adult female mice were sent to the International Space Station (ISS) as part of an early life science mission utilizing NASA's Rodent Habitat.
Its primary purpose was to provide further insight into the influence of a microgravity environment on various aspects of mammalian physiology and well-being as part of an ongoing program of research aimed ultimately at understanding and ameliorating the deleterious influences of space on the human body. The present study took advantage of video collected from fixed, in-flight cameras within the habitat itself to assess behavioral adaptations observed among in-flight mice aboard the ISS and differences in behavior with respect to a control group on the ground. Data collection consisted of several behavioral measures recorded by a trained observer with the assistance of interactive behavior analysis software. Specific behavioral measures included frequencies of conspecific interaction/sociability, time spent feeding and conducting hygienic behavior, and relative durations of thigmotactic behavior, which is commonly used as an index of anxiety. Data were used to test tentative hypotheses that such behaviors differ significantly across mice under microgravity versus 1g conditions, and the assumption that the novel experience of microgravity itself may represent an initially anxiogenic stimulus which an animal will eventually acclimate to, perhaps through habituation. 1. The International Space Station Research Opportunities and Accomplishments Science.gov (United States) Alleyne, Camille W. 2011-01-01 In 2010, the International Space Station (ISS) construction and assembly was completed to become a world-class scientific research laboratory. We are now in the era of utilization of this unique platform that facilitates ground-breaking research in the microgravity environment. There are opportunities for NASA-funded research; research funded under the auspices of the United States National Laboratory; and research funded by the International Partners - Japan, Europe, Russia and Canada.
The ISS facilities offer an opportunity to conduct research in a multitude of disciplines such as biology and biotechnology, physical science, human research, technology demonstration and development; and Earth and space science. The ISS is also a unique resource for educational activities that serve to motivate and inspire students to pursue careers in Science, Technology, Engineering and Mathematics. Even though we have just commenced full utilization of the ISS as a science laboratory, early investigations are yielding major results that are leading to such things as vaccine development, improved cancer drug delivery methods and treatment for debilitating diseases, such as Duchenne's Muscular Dystrophy. This paper 2. Development of Test Protocols for International Space Station Particulate Filters Science.gov (United States) Green, Robert D.; Vijayakumar, R.; Agui, Juan H. 2014-01-01 Air quality control on the International Space Station (ISS) is a vital requirement for maintaining a clean environment for the crew and the hardware. This becomes a serious challenge in pressurized space compartments since no outside air ventilation is possible, and a larger particulate load is imposed on the filtration system due to lack of gravitational settling. The ISS Environmental Control and Life Support System (ECLSS) uses a filtration system that has been in use for over 14 years and has proven to meet this challenge. The heart of this system is a traditional High-Efficiency Particulate Air (HEPA) filter configured to interface with the rest of the life support elements and provide effective cabin filtration. Over the years, the service life of these filters has been re-evaluated based on limited post-flight tests of returned filters and risk factors. On Earth, a well-designed and installed HEPA filter will last for several years, e.g. in industrial and research clean room applications.
Test methods for evaluating these filters are being developed on the basis of established test protocols used by the industry and the military. This paper will discuss the test methods adopted and test results on prototypes of the ISS filters. The results will assist in establishing whether the service life can be extended for these filters. Results from unused filters that have been in storage will also be presented to ascertain the shelf life and performance deterioration, if any, and to determine whether the shelf life may be extended. 3. Gene expression from plants grown on the International Space Station Science.gov (United States) Stimpson, Alexander; Pereira, Rhea; Kiss, John Z.; Correll, Melanie Three experiments were performed on the International Space Station (ISS) in 2006 as part of the TROPI experiments. These experiments were performed to study gravitropism and phototropism responses of Arabidopsis in microgravity (µg). Seedlings were grown with a variety of light and gravitational treatments for approximately five days. The frozen samples were returned to Earth during three space shuttle missions in 2007 and stored at -80 °C. Due to the limited amount of plant biomass returned, new protocols were developed to minimize the amount of material needed for RNA extraction as a preparation for microarray analysis. Using these new protocols, RNA was extracted from several sets of seedlings grown in red light followed by blue light, with one sample from the 1.0g treatment and the other at µg. Using a 2-fold change criterion, microarray (Affymetrix GeneChip) results showed that 613 genes were upregulated in the µg sample while 757 genes were downregulated. Upregulated genes in response to µg included transcription factors from the WRKY (15 genes), MYB (3) and ZF (8) families as well as those that are involved in auxin responses (10).
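As an illustrative aside (not part of the abstract), the 2-fold change criterion just described can be sketched as a simple filter over paired expression values. The gene names and expression values below are invented for the example; the function name is hypothetical and this is not the TROPI analysis pipeline itself.

```python
# Illustrative sketch of a 2-fold change criterion for calling genes
# up- or down-regulated in the microgravity (ug) sample vs. the 1.0g sample.
# Gene names and expression values are made up for the example.
def classify_genes(expr_ug, expr_1g, fold=2.0):
    """Return (upregulated, downregulated) gene lists for ug relative to 1g."""
    up, down = [], []
    for gene, ug_val in expr_ug.items():
        g1_val = expr_1g[gene]
        if ug_val >= fold * g1_val:
            up.append(gene)        # at least 2-fold higher in microgravity
        elif g1_val >= fold * ug_val:
            down.append(gene)      # at least 2-fold lower in microgravity
    return up, down

expr_ug = {"WRKY40": 9.0, "MYB15": 1.5, "ZAT10": 4.1}
expr_1g = {"WRKY40": 3.0, "MYB15": 4.0, "ZAT10": 3.9}
up, down = classify_genes(expr_ug, expr_1g)
print(up)    # ['WRKY40']
print(down)  # ['MYB15']
```

Genes whose expression differs by less than the fold threshold in either direction (ZAT10 here) are left unclassified, which is how a fold-change cutoff yields the distinct upregulated and downregulated gene counts reported in the abstract.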
Downregulated genes also included transcription factors such as MYB (5) and zinc finger (10), but interestingly only two WRKY family genes were downregulated during the µg treatment. Studies are underway to compare these results with other samples to identify the genes involved in the gravity and light signal transduction pathways (this project is supported by NASA NCC2-1200). 4. Psychological Selection of NASA Astronauts for International Space Station Missions Science.gov (United States) Galarza, Laura 1999-01-01 During the upcoming manned International Space Station (ISS) missions, astronauts will encounter the unique conditions of living and working with a multicultural crew in a confined and isolated space environment. The environmental, social, and mission-related challenges of these missions will require crewmembers to emphasize effective teamwork, leadership, group living and self-management to maintain the morale and productivity of the crew. The need for crew members to possess and display skills and behaviors needed for successful adaptability to ISS missions led us to upgrade the tools and procedures we use for astronaut selection. The upgraded tools include personality and biographical data measures. Content and construct-related validation techniques were used to link upgraded selection tools to critical skills needed for ISS missions. The results of these validation efforts showed that various personality and biographical data variables are related to expert and interview ratings of critical ISS skills. Upgraded and planned selection tools better address the critical skills, demands, and working conditions of ISS missions and facilitate the selection of astronauts who will more easily cope and adapt to ISS flights. 5. ISSLIVE! Bringing the Space Station to Every Generation Science.gov (United States) Harris, Philip D.; Price, Jennifer B.; Severance, Mark; Blue, Regina; Khan, Ahmed; Healy, Matthew D.; Ehlinger, Jesse B. 2011-01-01 6.
Space station needs, attributes and architectural options. Volume 3, attachment 1, task 1: Mission requirements Science.gov (United States) 1983-01-01 The development and systems architectural requirements of the space station program are described. The system design is determined by user requirements. Investigated topics include physical and life science experiments, commercial utilization, U.S. national security, and remote space operations. The economic impact of the space station program is analyzed. 7. 78 FR 66964 - International Space Station National Laboratory Advisory Committee; Charter Renewal Science.gov (United States) 2013-11-07 ... Committee is in the public interest in connection with the performance of duties imposed on NASA by law. The... SPACE ADMINISTRATION International Space Station National Laboratory Advisory Committee; Charter Renewal... the International Space Station National Laboratory Advisory Committee. SUMMARY: Pursuant to sections... 8. International Space Station Aeromedical Support in Star City, Russia Science.gov (United States) Cole, Richard; Chamberlin, Blake; Dowell, Gene; Castleberry, Tarah; Savage, Scott 2010-01-01 The Space Medicine Division at Johnson Space Center works with the International Space Station's international partners (IPs) to accomplish assigned health care tasks. Each IP may assign a flight surgeon to support their assigned crewmembers during all phases of training, in-flight operations, and postflight activities. Because of the extensive amount of astronaut training conducted in Star City, NASA, in collaboration with its IPs, has elected to keep a flight surgeon assigned to NASA's Star City office to provide support to the U.S., Canadian, Japanese, and European astronauts during hazardous training activities and provide support for any contingency landings of Soyuz spacecraft in Kazakhstan.
The physician also provides support as necessary to the Mission Control Center in Moscow for non-Russian crew-related activities. In addition, the physician in Star City provides ambulatory medical care to the non-Russian-assigned personnel in Star City and visiting dependents. Additional work involves all medical supplies, administration, and inventory. The Star City physician assists in medical evacuation and/or in obtaining support from western clinics in Moscow when required care exceeds local resources. Overall, the Russians are responsible for operations and the medical care of the entire crew when training in Star City and during launch/landing operations. However, they allow international partner flight surgeons to care for their crewmembers as agreed to in the ISS Medical Operations Requirements Document. Medical support focuses on pressurized, monitored, and other hazardous training activities. One of the most important jobs is to act as a medical advocate for the astronauts and to reduce the threat that these hazardous activities pose. Although the Russians have a robust medical system, evacuation may be needed to facilitate ongoing medical care. There are several international medical evacuation companies that provide this care. 9. Interplanetary Transit Simulations Using the International Space Station Science.gov (United States) Charles, J. B.; Arya, Maneesh 2010-01-01 It has been suggested that the International Space Station (ISS) be utilized to simulate the transit portion of long-duration missions to Mars and near-Earth asteroids (NEA). The ISS offers a unique environment for such simulations, providing researchers with a high-fidelity platform to study, enhance, and validate technologies and countermeasures for these long-duration missions. From a space life sciences perspective, two major categories of human research activities have been identified that will harness the various capabilities of the ISS during the proposed simulations. 
The first category includes studies that require the use of the ISS, typically because of the need for prolonged weightlessness. The ISS is currently the only available platform capable of providing researchers with access to a weightless environment over an extended duration. In addition, the ISS offers high fidelity for other fundamental space environmental factors, such as isolation, distance, and accessibility. The second category includes studies that do not require use of the ISS in the strictest sense, but can exploit its use to maximize their scientific return more efficiently and productively than in ground-based simulations. In addition to conducting Mars and NEA simulations on the ISS, increasing the current increment duration on the ISS from 6 months to a longer duration will provide opportunities for enhanced and focused research relevant to long-duration Mars and NEA missions. Although it is currently believed that increasing the ISS crew increment duration to 9 or even 12 months will pose little additional risk to crewmembers, additional medical monitoring capabilities may be required beyond those currently used for the ISS operations. The use of the ISS to simulate aspects of Mars and NEA missions seems practical, and it is recommended that planning begin soon, in close consultation with all international partners. 10. International Space Station Data Collection for Disaster Response Science.gov (United States) Stefanov, William L.; Evans, Cynthia A. 2015-01-01 Remotely sensed data acquired by orbital sensor systems has emerged as a vital tool to identify the extent of damage resulting from a natural disaster, as well as providing near-real time mapping support to response efforts on the ground and humanitarian aid efforts. The International Space Station (ISS) is a unique terrestrial remote sensing platform for acquiring disaster response imagery. 
Unlike automated remote-sensing platforms, it has a human crew; is equipped with both internal and externally-mounted remote sensing instruments; and has an inclined, low-Earth orbit that provides variable views and lighting (day and night) over 95 percent of the inhabited surface of the Earth. As such, it provides a useful complement to autonomous sensor systems in higher altitude polar orbits. NASA remote sensing assets on the station began collecting International Disaster Charter (IDC) response data in May 2012. The initial NASA ISS sensor systems responding to IDC activations included the ISS Agricultural Camera (ISSAC), mounted in the Window Observational Research Facility (WORF); the Crew Earth Observations (CEO) Facility, where the crew collects imagery using off-the-shelf handheld digital cameras; and the Hyperspectral Imager for the Coastal Ocean (HICO), a visible to near-infrared system mounted externally on the Japan Experiment Module Exposed Facility. The ISSAC completed its primary mission in January 2013. It was replaced by the very high resolution ISS SERVIR Environmental Research and Visualization System (ISERV) Pathfinder, a visible-wavelength digital camera, telescope, and pointing system. Since the start of IDC response in 2012, there have been 108 IDC activations; NASA sensor systems have collected data for thirty-two of these events. Of the successful data collections, eight involved two or more ISS sensor systems responding to the same event. Data have also been collected by International Partners in response to natural disasters, most notably JAXA and 11. The International Space Station as a test platform for evaluating robot mobility in a microgravity environment Science.gov (United States) Baum, Christopher H. 2000-01-01 The International Space Station is the largest space construction project attempted to date. The station will also be permanently staffed for years.
In order to build and work in the station, humans must work with a new generation of space-borne robots. Robotics has a strong history of using competition to further research. The Destiny laboratory offers a unique environment to stage a competition to advance robotic science in this area. 12. Centrifuge Facility for the International Space Station Alpha Science.gov (United States) Johnson, Catherine C.; Hargens, Alan R. 1994-01-01 The Centrifuge Facility planned for the International Space Station Alpha has undergone considerable redesign over the past year, primarily because the Station is now viewed as a 10 year mission rather than a 30 year mission and because of the need to simplify the design to meet budget constraints and a 2000 launch date. The basic elements of the Centrifuge Facility remain the same, i.e., a 2.5 m diameter centrifuge, a micro-g holding unit, plant and animal habitats, a glovebox and a service unit. The centrifuge will still provide the full range of artificial gravity from 0.01 g to 2 g as originally planned; however, the extractor to permit withdrawal of habitats from the centrifuge without stopping the centrifuge has been eliminated. The specimen habitats have also been simplified and are derived from other NASA programs. The Plant Research Unit being developed by the Gravitational Biology Facility will be used to house plants in the Centrifuge Facility. Although not as ambitious as the Centrifuge Facility plant habitat, it will provide much better environmental control and lighting than the current Shuttle-based Plant Growth Facility. Similarly, rodents will be housed in the Advanced Animal Habitat being developed for the Shuttle program. The Centrifuge Facility and ISSA will provide the opportunity to perform repeatable, high-quality science. The long duration increments available on the Station will permit multigeneration studies on both plants and animals which have not previously been possible.
The Centrifuge Facility will accommodate a sufficient number of specimens to permit statistically significant sampling to investigate the time course of adaptation to altered gravity environments. The centrifuge will for the first time permit investigators to use gravity itself as a tool to investigate fundamental processes, to investigate the intensity and duration of gravity required to maintain normal structure and function, to separate the effects of micro-g from 13. Space-Based Reconfigurable Software Defined Radio Test Bed Aboard International Space Station Science.gov (United States) Reinhart, Richard C.; Lux, James P. 2014-01-01 14. Space Station Systems Technology Study. Volume 2: Trade study and technology selection technical report Science.gov (United States) 1984-01-01 High leverage technologies are examined for application to the space station. The areas under investigation include attitude control, data management, long life thermal management, and automated housekeeping integration. 15. Research on the International Space Station - An Overview Science.gov (United States) Evans, Cynthia A.; Robinson, Julie A.; Tate-Brown, Judy M. 2009-01-01 The International Space Station (ISS) celebrates ten years of operations in 2008. While the station did not support permanent human crews during the first two years of operations (November 1998 to November 2000), it hosted a few early science experiments months before the first international crew took up residence. Since that time, and simultaneous with the complicated task of ISS construction and overcoming impacts from the tragic Columbia accident, science returns from the ISS have been growing at a steady pace. As of this writing, over 162 experiments have been operated on the ISS, supporting research for hundreds of ground-based investigators from the U.S. and international partners. This report summarizes the experimental results collected to date.
Today, NASA's priorities for research aboard the ISS center on understanding human health during long-duration missions, researching effective countermeasures for long-duration crewmembers, and researching and testing new technologies that can be used for future exploration crews and spacecraft. Through the U.S. National Laboratory designation, the ISS is also a platform available to other government agencies. Research on ISS supports new understandings, methods or applications relevant to life on Earth, such as understanding effective protocols to protect against loss of bone density or better methods for producing stronger metal alloys. Experiment results have already been used in applications as diverse as the manufacture of solar cell and insulation materials for new spacecraft and the verification of complex numerical models for behavior of fluids in fuel tanks. A synoptic publication of these results will be forthcoming in 2009. At the 10-year point, the scientific returns from ISS should increase at a rapid pace. During the 2008 calendar year, the laboratory space and research facilities were tripled with the addition of ESA's Columbus and JAXA's Kibo scientific modules joining NASA's Destiny Laboratory. All three 16. Improving Safety on the International Space Station: Transitioning to Electronic Emergency Procedure Books on the International Space Station Science.gov (United States) Carter-Journet, Katrina; Clahoun, Jessica; Morrow, Jason; Duncan, Gary 2012-01-01 17. Crew Restraint Design for the International Space Station Science.gov (United States) Norris, Lena; Holden, Kritina; Whitmore, Mihriban 2006-01-01 18. International Space Station Instruments Collect Imagery of Natural Disasters Science.gov (United States) Evans, C. A.; Stefanov, W. L. 2013-01-01 A new focus for utilization of the International Space Station (ISS) is conducting basic and applied research that directly benefits Earth's citizenry.
In the Earth Sciences, one such activity is collecting remotely sensed imagery of disaster areas and making those data immediately available through the USGS Hazards Data Distribution System, especially in response to activations of the International Charter for Space and Major Disasters (known informally as the "International Disaster Charter", or IDC). The ISS, together with other NASA orbital sensor assets, responds to IDC activations following notification by the USGS. Most of the activations are due to natural hazard events, including large floods, impacts of tropical systems, major fires, and volcanic eruptions and earthquakes. Through the ISS Program Science Office, we coordinate with ISS instrument teams for image acquisition using several imaging systems. As of 1 August 2013, we have successfully contributed imagery data in support of 14 Disaster Charter Activations, including regions in both Haiti and the east coast of the US impacted by Hurricane Sandy; flooding events in Russia, Mozambique, India, Germany and western Africa; and forest fires in Algeria and Ecuador. ISS-based sensors contributing data include the Hyperspectral Imager for the Coastal Ocean (HICO), the ISERV (ISS SERVIR Environmental Research and Visualization System) Pathfinder camera mounted in the US Window Observational Research Facility (WORF), the ISS Agricultural Camera (ISSAC), formerly operating from the WORF, and high resolution handheld camera photography collected by crew members (Crew Earth Observations). When orbital parameters and operations support data collection, ISS-based imagery adds to the resources available to disaster response teams and contributes to the public-domain record of these events for later analyses. 19. International Space Station Potable Water Characterization for 2013 Science.gov (United States) Straub, John E., II; Plumlee, Debrah K.; Schultz, John R.; Mudgett, Paul D.
2014-01-01 In this post-construction, operational phase of International Space Station (ISS) with an ever-increasing emphasis on its use as a test-bed for future exploration missions, the ISS crews continue to rely on water reclamation systems for the majority of their water needs. The onboard water supplies include U.S. Segment potable water from humidity condensate and urine, Russian Segment potable water from condensate, and ground-supplied potable water, as reserve. In 2013, the cargo returned on the Soyuz 32-35 flights included archival potable water samples collected from Expeditions 34-37. The former Water and Food Analytical Laboratory (now Toxicology and Environmental Chemistry Laboratory) at the NASA Johnson Space Center continued its long-standing role of performing chemical analyses on ISS return water samples to verify compliance with potable water quality specifications. This paper presents and discusses the analytical results for potable water samples returned from Expeditions 34-37, including a comparison to ISS quality standards. During the summer of 2013, the U.S. Segment potable water experienced a third temporary rise and fall in total organic carbon (TOC) content, as the result of organic contamination breaking through the water system's treatment process. Analytical results for the Expedition 36 archival samples returned on Soyuz 34 confirmed that dimethylsilanediol was once again the responsible contaminant, just as it was for the previous comparable TOC rises in 2010 and 2012. Discussion herein includes the use of the in-flight total organic carbon analyzer (TOCA) as a key monitoring tool for tracking these TOC rises and scheduling appropriate remediation. 20.
Unexpected Control Structure Interaction on International Space Station Science.gov (United States) Gomez, Susan F.; Platonov, Valery; Medina, Elizabeth A.; Borisenko, Alexander; Bogachev, Alexey 2017-01-01 On June 23, 2011, the International Space Station (ISS) was performing a routine 180 degree yaw maneuver in support of a Russian vehicle docking when the on board Russian Segment (RS) software unexpectedly declared two attitude thrusters failed and switched thruster configurations in response to unanticipated ISS dynamic motion. Flight data analysis after the maneuver indicated that higher than predicted structural loads had been induced at various locations on the United States (U.S.) segment of the ISS. Further analysis revealed that the attitude control system was firing thrusters in response to both structural flex and rigid body rates, which resonated the structure and caused high loads and fatigue cycles. It was later determined that the thrusters themselves were healthy. The RS software logic, which was intended to react to thruster failures, had instead been heavily influenced by interaction between the control system and structural flex. This paper will discuss the technical aspects of the control structure interaction problem that led to the RS control system firing thrusters in response to structural flex, the factors that led to insufficient preflight analysis of the thruster firings, and the ramifications the event had on the ISS. An immediate consequence included limiting which thrusters could be used for attitude control. This complicated the planning of on-orbit thruster events and necessitated the use of suboptimal thruster configurations that increased propellant usage and caused thruster lifetime usage concerns.
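The failure mode described above, thruster pulses phased with a lightly damped structural mode, can be reproduced with a toy oscillator. This is an illustrative sketch with invented frequency, damping, and force values, not ISS structural data:

```python
import math

def peak_response(force_hz, nat_hz=0.5, zeta=0.005, dt=0.002, t_end=120.0):
    """Peak |x| of a lightly damped flex mode (unit mass),
    x'' + 2*zeta*wn*x' + wn^2*x = f(t),
    driven by an on/off (thruster-like) square-wave force, integrated
    with semi-implicit Euler."""
    wn = 2.0 * math.pi * nat_hz
    wf = 2.0 * math.pi * force_hz
    x = v = t = peak = 0.0
    for _ in range(int(t_end / dt)):
        f = 1.0 if math.sin(wf * t) >= 0.0 else -1.0  # bang-bang forcing
        a = f - 2.0 * zeta * wn * v - wn * wn * x
        v += a * dt
        x += v * dt          # uses the updated v: symplectic, stable
        peak = max(peak, abs(x))
        t += dt
    return peak

on_res = peak_response(0.5)    # firings synchronized with the flex mode
off_res = peak_response(0.13)  # firings well away from the mode
print(on_res > 5 * off_res)    # resonant firing amplifies the response many-fold
```

With damping this light, forcing at the modal frequency grows the response by more than an order of magnitude over off-resonance forcing, which is qualitatively what the flight data showed.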
In addition to the technical aspects of the problem, the team dynamics and communication shortcomings that led to such an event happening in an environment where extensive analysis is performed in support of human space flight will also be examined. Finally, the technical solution will be presented, which required a multidisciplinary effort between the U.S. and Russian control system engineers and loads and dynamics structural engineers to 1. Space Station Freedom advanced photovoltaics and battery technology development planning Science.gov (United States) Brender, Karen D.; Cox, Spruce M.; Gates, Mark T.; Verzwyvelt, Scott A. 1993-01-01 Space Station Freedom (SSF) usable electrical power is planned to be built up incrementally during assembly phase to a peak of 75 kW end-of-life (EOL) shortly after Permanently Manned Capability (PMC) is achieved in 1999. This power will be provided by planar silicon (Si) arrays and nickel-hydrogen (NiH2) batteries. The need for power is expected to grow from 75 kW to as much as 150 kW EOL during the evolutionary phase of SSF, with initial increases beginning as early as 2002. Providing this additional power with current technology may not be as cost effective as using advanced technology arrays and batteries expected to develop prior to this evolutionary phase. A six-month study sponsored by NASA Langley Research Center and conducted by Boeing Defense and Space Group was initiated in Aug. 1991. The purpose of the study was to prepare technology development plans for cost effective advanced photovoltaic (PV) and battery technologies with application to SSF growth, SSF upgrade after its arrays and batteries reach the end of their design lives, and other low Earth orbit (LEO) platforms. Study scope was limited to information available in the literature, informal industry contacts, and key representatives from NASA and Boeing involved in PV and battery research and development. 
Ten battery and 32 PV technologies were examined and their performance estimated for SSF application. Promising technologies were identified based on performance and development risk. Rough order of magnitude cost estimates were prepared for development, fabrication, launch, and operation. Roadmaps were generated describing key issues and development paths for maturing these technologies with focus on SSF application. 2. High-resolution robot tracking and direction finding for space station environment Science.gov (United States) Shahrabi, Kamal 1993-03-01 In the past few years the problem of location finding and tracking of extravehicular robots in a space station environment, and the related problem of estimating the parameters of signals in noise, have attracted considerable interest. Conventional direction finding, tracking, and locating techniques such as maximum likelihood (ML) and multiple signal characterization (MUSIC) are proving inadequate to support the full and effective utilization of robotics in a space station environment. The scope of this work is to provide a new and more efficient signal processing technique for a space station robotic tracking system which overcomes existing technical limitations such as radio transmission multipath, station reflections, the number of robots, space station environment, stringent resolution requirements, and space station architecture. In general, this work contains a block level design and study of a communication system for a space station involving spread spectrum and digital signal processing techniques that achieves a space station robotic tracking implementation.
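One standard building block for such a tracking system is pulse-compression ranging: transmit a chirp and matched-filter the return to estimate propagation delay. A minimal sketch (all waveform parameters are invented for illustration and are not taken from the report):

```python
import numpy as np

# Toy parameters: 1 MHz sample rate, 1 ms linear chirp from 50 kHz to 200 kHz.
fs = 1_000_000
t = np.arange(int(fs * 1e-3)) / fs
chirp = np.sin(2 * np.pi * (50e3 * t + 0.5 * (200e3 - 50e3) / 1e-3 * t**2))

# Received signal: the chirp echoed back after a 300-sample propagation delay.
delay = 300
rx = np.zeros(len(chirp) + 1000)
rx[delay:delay + len(chirp)] += chirp
rx += 0.2 * np.random.default_rng(0).standard_normal(len(rx))  # additive noise

# Matched filter: cross-correlate and take the peak to estimate the delay.
corr = np.correlate(rx, chirp, mode="valid")
est = int(np.argmax(corr))
print(est)
```

The correlation peak lands at the propagation delay in samples; dividing by the sample rate and multiplying by the speed of light (halved for a round trip) converts it to range, and polling lets each robot's echo be identified individually.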
This report contains an extensive analysis of the system performance from the following points of view: utilization of a chirp signal, which, in conjunction with a polling procedure, allows for individual robot identification, location, and tracking; estimation of the number of antennae; determination of the antennae locations on the space station; generation of a detailed block diagram design; and an overall system analysis that considers the effects of signal multipath. 3. Quantitative Risk Modeling of Fire on the International Space Station Science.gov (United States) Castillo, Theresa; Haught, Megan 2014-01-01 The International Space Station (ISS) Program has worked to prevent fire events and to mitigate their impacts should they occur. Hardware is designed to reduce sources of ignition, oxygen systems are designed to control leaking, flammable materials are prevented from flying to ISS whenever possible, the crew is trained in fire response, and fire response equipment improvements are sought out and funded. Fire prevention and mitigation are a top ISS Program priority - however, programmatic resources are limited; thus, risk trades are made to ensure an adequate level of safety is maintained onboard the ISS. In support of these risk trades, the ISS Probabilistic Risk Assessment (PRA) team has modeled the likelihood of fire occurring in the ISS pressurized cabin, a phenomenological event that has never before been probabilistically modeled in a microgravity environment. This paper will discuss the genesis of the ISS PRA fire model, its enhancement in collaboration with fire experts, and the results which have informed ISS programmatic decisions and will continue to be used throughout the life of the program. 4. Hollow cathode heater development for the Space Station plasma contactor Science.gov (United States) Soulas, George C. 1993-01-01 A hollow cathode-based plasma contactor has been selected for use on the Space Station.
During the operation of the plasma contactor, the hollow cathode heater will endure approximately 12000 thermal cycles. Since a hollow cathode heater failure would result in a plasma contactor failure, a hollow cathode heater development program was established to produce a reliable heater design. The development program includes the heater design, process documents for both heater fabrication and assembly, and heater testing. The heater design was a modification of a sheathed ion thruster cathode heater. Three heaters have been tested to date using direct current power supplies. Performance testing was conducted to determine input current and power requirements for achieving activation and ignition temperatures, single unit operational repeatability, and unit-to-unit operational repeatability. Comparisons of performance testing data at the ignition input current level for the three heaters show the unit-to-unit repeatability of input power and tube temperature near the cathode tip to be within 3.5 W and 44 degrees C, respectively. Cyclic testing was then conducted to evaluate reliability under thermal cycling. The first heater, although damaged during assembly, completed 5985 ignition cycles before failing. Two additional heaters were subsequently fabricated and have completed 3178 cycles to date in an on-going test. 5. Status: Crewmember Noise Exposures on the International Space Station Science.gov (United States) Limardo-Rodriguez, Jose G.; Allen, Christopher S.; Danielson, Richard W. 2015-01-01 The International Space Station (ISS) provides a unique environment where crewmembers from the US and our international partners work and live for as long as 6 to 12 consecutive months. 
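Noise exposure over stays of this length is conventionally summarized as an equivalent continuous level, the energy average of the measured levels. A minimal sketch with hypothetical hourly values (the 62/50 dBA figures are ours for illustration, not NASA limits):

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level (dB) over equal-duration intervals:
    average the acoustic intensities, then convert back to decibels."""
    intensities = [10.0 ** (l / 10.0) for l in levels_db]
    return 10.0 * math.log10(sum(intensities) / len(intensities))

# Hypothetical day: 16 waking hours near 62 dBA, 8 sleep hours near 50 dBA.
day = [62.0] * 16 + [50.0] * 8
print(round(leq(day), 1))  # 60.4 -- the loud hours dominate the energy average
```

Because the average is taken on intensities rather than decibels, a few loud hours dominate the 24-hour figure, which is why both work-period and sleep-period levels are tracked separately on ISS.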
During these long-duration ISS missions, noise exposures from onboard equipment are posing concerns for human factors and crewmember health risks, such as possible reductions in hearing sensitivity, disruptions of crew sleep, interference with speech intelligibility and voice communications, interference with crew task performance, and reduced alarm audibility. It is crucial to control acoustical noise aboard ISS to acceptable noise exposure levels during the work-time period, and to also provide a restful sleep environment during the sleep-time period. Acoustic dosimeter measurements, obtained when the crewmember wears the dosimeter for 24-hour periods, are conducted onboard ISS every 60 days and compared to ISS flight rules. NASA personnel then assess the acoustic environment to which the crewmembers are exposed, and provide recommendations for hearing protection device usage. The purpose of this paper is to provide an update on the status of ISS noise exposure monitoring and hearing conservation strategies, as well as to summarize assessments of acoustic dosimeter data collected since the Increment 36 mission (April 2013). The updated noise level constraints flight rule, as well as the Noise Exposure Estimation Tool and the Noise Hazard Inventory implementation for predicting crew noise exposures and recommending to ISS crewmembers when hearing protection devices are required, will also be described. 6. The Altcriss project on board the International Space Station Science.gov (United States) Casolino, M.; Minori, M.; Picozza, P.; Fuglesang, C.; Galper, A.; Popov, A.; Benghin, V.; Petrov, V.M.; Nagamatsu, A.; Berger, T.; Reitz, G.; Durante, M.; Pugliese, M.; Roca, V.; Sihver, L.; Cucinotta, F.; Semones, E.; Shavers, M.; Guarnieri, V.; Lobascio, C.; Castagnolo, D.; Fortezza, R. The Altcriss project aims to perform long-term measurements of the radiation environment at different points of the International Space Station.
To achieve this goal, it employs an active silicon detector, Sileye-3/Alteino, to monitor nuclei up to Iron in the energy range above 40 MeV/n. Both long term modulation of galactic cosmic rays going toward solar minimum and solar particle events will be observed. A number of different dosimeters are being employed to measure the dose and compare it with the silicon detector data. Another aim of the project is to monitor the effectiveness of shielding materials in orbit: a set of polyethylene tiles is placed in the detector acceptance window, and particle flux and composition are compared with measurements in the same locations without shielding. Dosimeters are thus placed behind the shielding material and in an unshielded location to cross-correlate this information. The observation campaign began in December 2005 and has been running continuously ever since. Active and passive data have been retrieved at the end of Expeditions 13 and 14 and the Astrolab mission. In this work we will describe the experiment and the preliminary results. 7. Status of the International Space Station Waste and Hygiene Compartment Science.gov (United States) Walker, Stephanie; Zahner, Christopher 2010-01-01 The Waste and Hygiene Compartment (WHC) serves as the primary system for removal and containment of metabolic waste and hygiene activities on board the United States segment of the International Space Station (ISS). The WHC was launched on ULF 2 and is currently in the U.S. Laboratory and is integrated into the Water Recovery System (WRS) where pretreated urine is processed by the Urine Processor Assembly (UPA). The waste collection part of the WHC system is derived from the Service Module system and was provided by RSC-Energia along with additional hardware to allow for urine delivery to the UPA. The System has been integrated in an ISS standard equipment rack structure for use on the U.S. segment of the ISS.
The system has experienced several events of interest during the deployment, checkout, and operation of the system during its first year of use and these will be covered in this paper. Design and on-orbit performance will also be discussed. 8. International Space Station USOS Waste and Hygiene Compartment Development Science.gov (United States) Link, Dwight E., Jr.; Broyan, James Lee, Jr.; Gelmis, Karen; Philistine, Cynthia; Balistreri, Steven 2007-01-01 The International Space Station (ISS) currently provides human waste collection and hygiene facilities in the Russian Segment Service Module (SM) which supports a three person crew. Additional hardware is planned for the United States Operational Segment (USOS) to support expansion of the crew to six person capability. The additional hardware will be integrated in an ISS standard equipment rack structure that was planned to be installed in the Node 3 element; however, the ISS Program Office recently directed implementation of the rack, or Waste and Hygiene Compartment (WHC), into the U.S. Laboratory element to provide early operational capability. In this configuration, preserved urine from the WHC waste collection system can be processed by the Urine Processor Assembly (UPA) in either the U.S. Lab or Node 3 to recover water for crew consumption or oxygen production. The human waste collection hardware is derived from the Service Module system and is provided by RSC-Energia. This paper describes the concepts, design, and integration of the WHC waste collection hardware into the USOS including integration with U.S. Lab and Node 3 systems. 9. 
Light Microscopy Module, International Space Station Premier Automated Microscope Science.gov (United States) Meyer, William V.; Sicker, Ronald J.; Chiaramonte, Francis P.; Brown, Daniel F.; O'Toole, Martin A.; Foster, William M.; Motil, Brian J.; Abbot-Hearn, Amber Ashley; Atherton, Arthur Johnson; Beltram, Alexander; 2015-01-01 The Light Microscopy Module (LMM) was launched to the International Space Station (ISS) in 2009 and began science operations in 2010. It continues to support Physical and Biological scientific research on ISS. During 2015, if all goes as planned, five experiments will be completed: [1] Advanced Colloids Experiments with a manual sample base -3 (ACE-M-3), [2] the Advanced Colloids Experiment with a Heated Base -1 (ACE-H-1), [3] (ACE-H-2), [4] the Advanced Plant Experiment -03 (APEX-03), and [5] the Microchannel Diffusion Experiment (MDE). Preliminary results, along with an overview of present and future LMM capabilities, will be presented; this includes details on the planned data imaging processing and storage system, along with the confocal upgrade to the core microscope. [1] New York University: Paul Chaikin, Andrew Hollingsworth, and Stefano Sacanna, [2] University of Pennsylvania: Arjun Yodh and Matthew Gratale, [3] a consortium of universities from the State of Kentucky working through the Experimental Program to Stimulate Competitive Research (EPSCoR): Stuart Williams, Gerold Willing, Hemali Rathnayake, et al., [4] from the University of Florida and CASIS: Anna-Lisa Paul and Rob Ferl, and [5] from the Methodist Hospital Research Institute from CASIS: Alessandro Grattoni and Giancarlo Canavese. 10. Science.gov (United States) Carlisle, R. F. 1982-01-01 Design guidelines and functional systems being considered in the process of defining the configuration of the automated systems for a manned space station are outlined.
The requirements are dependent on life-cycle costing and will set the necessary level of automation, as well as autonomy from outside commands. Fault protection routines have been largely devised according to successful programming on the Voyager spacecraft. An analysis is still needed of the housekeeping functions, including human necessities, machine functions, and mission objectives. A data base will result, defining the functions that have historically been delegated to either man or machine. Care must be taken to coordinate and document stationkeeping functions that might interface with mission functions. A data management system that is flexible with regard to changing mission objectives and to the MTBF factors, which will determine the level of technology to be used, is required. Expert systems will be integrated into the automation to guide the machines in problem solving, including ensuring adequate management of the battery subsystem. 11. CALET: High energy cosmic ray observatory on International Space Station Science.gov (United States) Mori, Masaki; CALET Collaboration 2012-12-01 The CALorimetric Electron Telescope (CALET) is a Japanese-led international mission being developed as part of the utilization plan for the International Space Station (ISS). CALET will be launched by an H-II B rocket utilizing the Japanese developed HTV (H-II Transfer Vehicle) in 2014. The instrument will be robotically emplaced upon the Exposed Facility attached to the Japanese Experiment Module (JEM-EF). CALET is a calorimeter based instrument which will have superior energy resolution and excellent separation between hadrons and electrons and between charged particles and gamma rays in the GeV to trans-TeV energy range.
CALET will address many questions in high energy astrophysics, including (1) the nature of the sources of high energy particles and photons, through the high energy electron spectrum, (2) signatures of dark matter, in either the high energy electron or gamma ray spectrum, (3) the details of particle propagation in the Galaxy, by a combination of energy spectrum measurements of electrons, protons and highercharged nuclei. In this paper the outline and current status of CALET are summarized. 12. Energy calibration of CALET onboard the International Space Station Science.gov (United States) Asaoka, Y.; Akaike, Y.; Komiya, Y.; Miyata, R.; Torii, S.; Adriani, O.; Asano, K.; Bagliesi, M. G.; Bigongiari, G.; Binns, W. R.; Bonechi, S.; Bongi, M.; Brogi, P.; Buckley, J. H.; Cannady, N.; Castellini, G.; Checchia, C.; Cherry, M. L.; Collazuol, G.; Di Felice, V.; Ebisawa, K.; Fuke, H.; Guzik, T. G.; Hams, T.; Hareyama, M.; Hasebe, N.; Hibino, K.; Ichimura, M.; Ioka, K.; Ishizaki, W.; Israel, M. H.; Javaid, A.; Kasahara, K.; Kataoka, J.; Kataoka, R.; Katayose, Y.; Kato, C.; Kawanaka, N.; Kawakubo, Y.; Kitamura, H.; Krawczynski, H. S.; Krizmanic, J. F.; Kuramata, S.; Lomtadze, T.; Maestro, P.; Marrocchesi, P. S.; Messineo, A. M.; Mitchell, J. W.; Miyake, S.; Mizutani, K.; Moiseev, A. A.; Mori, K.; Mori, M.; Mori, N.; Motz, H. M.; Munakata, K.; Murakami, H.; Nakagawa, Y. E.; Nakahira, S.; Nishimura, J.; Okuno, S.; Ormes, J. F.; Ozawa, S.; Pacini, L.; Palma, F.; Papini, P.; Penacchioni, A. V.; Rauch, B. F.; Ricciarini, S.; Sakai, K.; Sakamoto, T.; Sasaki, M.; Shimizu, Y.; Shiomi, A.; Sparvoli, R.; Spillantini, P.; Stolzi, F.; Takahashi, I.; Takayanagi, M.; Takita, M.; Tamura, T.; Tateyama, N.; Terasawa, T.; Tomida, H.; Tsunesada, Y.; Uchihori, Y.; Ueno, S.; Vannuccini, E.; Wefel, J. P.; Yamaoka, K.; Yanagita, S.; Yoshida, A.; Yoshida, K.; Yuda, T. 
2017-05-01 In August 2015, the CALorimetric Electron Telescope (CALET), designed for long exposure observations of high energy cosmic rays, docked with the International Space Station (ISS) and shortly thereafter began to collect data. CALET will measure the cosmic ray electron spectrum over the energy range of 1 GeV to 20 TeV with a very high resolution of 2% above 100 GeV, based on a dedicated instrument incorporating an exceptionally thick 30 radiation-length calorimeter with both total absorption and imaging (TASC and IMC) units. Each TASC readout channel must be carefully calibrated over the extremely wide dynamic range of CALET that spans six orders of magnitude in order to obtain a degree of calibration accuracy matching the resolution of energy measurements. These calibrations consist of calculating the conversion factors between ADC units and energy deposits, ensuring linearity over each gain range, and providing a seamless transition between neighboring gain ranges. This paper describes these calibration methods in detail, along with the resulting data and associated accuracies. The results presented in this paper show that a sufficient accuracy was achieved for the calibrations of each channel in order to obtain a suitable resolution over the entire dynamic range of the electron spectrum measurement. 13. Design and performance of space station photovoltaic radiators Science.gov (United States) White, K. Alan; Fleming, Mike L.; Lee, Avis Y. 1993-01-01 The design and performance of the Space Station Freedom Photovoltaic (PV) Power Module Thermal Control System radiators is presented. The PV Radiator is of a single phase pumped loop design using liquid ammonia as the coolant. Key design features are described, including the base structure, deployment mechanism, radiator panels, and two independent coolant loops. The basis for a specific mass of 7.8 kg/sqm is discussed, and methods of lowering this number for future systems are briefly described.
Key performance parameters are also addressed. A summary of test results and analysis is presented to illustrate the survivability of the radiator in the micrometeoroid and orbital debris environment. A design criterion of 95% probability of no penetration of both fluid loops over a 10 year period is shown to be met. Methods of increasing the radiator survivability even further are presented. Thermal performance is also discussed, including a comparison of modeling predictions with existing test results. Degradation in thermal performance due to exposure to atomic oxygen and ultraviolet radiation in the low Earth orbit environment is presented. The structural criteria to which the radiator is designed are also briefly addressed. Finally, potential design improvements are discussed. 14. Upgrades to the International Space Station Water Recovery System Science.gov (United States) Kayatin, Matthew J.; Pruitt, Jennifer M.; Nur, Mononita; Takada, Kevin C.; Carter, Layne 2017-01-01 The International Space Station (ISS) Water Recovery System (WRS) includes the Water Processor Assembly (WPA) and the Urine Processor Assembly (UPA). The WRS produces potable water from a combination of crew urine (first processed through the UPA), crew latent, and Sabatier product water. Though the WRS has performed well since operations began in November 2008, several modifications have been identified to improve the overall system performance. These modifications aim to reduce resupply and improve overall system reliability, which is beneficial for the ongoing ISS mission as well as for future NASA manned missions. The following paper details efforts to improve the WPA through the use of reverse osmosis membrane technology to reduce the resupply mass of the WPA Multi-filtration Bed and an improved catalyst for the WPA Catalytic Reactor to reduce the operational temperature and pressure.
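The radiator design criterion quoted above, 95% probability of no penetration over 10 years, can be unpacked with a little probability arithmetic. The sketch below assumes the annual penetration risk is independent and identically distributed, which is our simplification rather than the paper's stated model:

```python
# 0.95 probability of no penetration of both loops over 10 years; under an
# i.i.d.-per-year assumption, the implied annual survival probability is
# the 10th root of the 10-year figure.
p_10yr = 0.95
p_1yr = p_10yr ** (1.0 / 10.0)
print(round(p_1yr, 4))  # 0.9949 -- about a 0.51% penetration chance per year
```

The same root-taking works for any mission length, which is handy when comparing survivability criteria quoted over different durations.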
For the UPA, this paper discusses progress on various concepts for improving the reliability of the system, including the implementation of a more reliable drive belt, improved methods for managing condensate in the stationary bowl of the Distillation Assembly, and evaluating upgrades to the UPA vacuum pump. 15. Space station static and dynamic analyses using parallel methods Science.gov (United States) Gupta, V.; Newell, J.; Storaasli, O.; Baddourah, M.; Bostic, S. 1993-01-01 Algorithms for high-performance parallel computers are applied to perform static analyses of large-scale Space Station finite-element models (FEMs). Several parallel-vector algorithms under development at NASA Langley are assessed. Sparse matrix solvers were found to be more efficient than banded symmetric or iterative solvers for the static analysis of large-scale applications. In addition, new sparse and 'out-of-core' solvers were found superior to substructure (superelement) techniques which require significant additional cost and time to perform static condensation during global FEM matrix generation as well as the subsequent recovery and expansion. A method to extend the fast parallel static solution techniques to reduce the computation time for dynamic analysis is also described. The resulting static and dynamic algorithms offer design economy for preliminary multidisciplinary design optimization and FEM validation against test modes. The algorithms are being optimized for parallel computers to solve one-million degrees-of-freedom (DOF) FEMs. The high-performance computers at NASA afforded effective software development and testing, and efficient, accurate solutions with timely system response and graphical interpretation of results rarely found in industry. Based on the authors' experience, similar cooperation between industry and government should be encouraged for similar large-scale projects in the future. 16.
Enhanced International Space Station Ku-Band Telemetry Service Science.gov (United States) Cecil, Andrew J.; Pitts, R. Lee; Welch, Steven J.; Bryan, Jason D. 2014-01-01 17. Aerosol Sampling Experiment on the International Space Station Science.gov (United States) Meyer, Marit E. 2017-01-01 The International Space Station (ISS) is a unique indoor environment which serves as both home and workplace to the astronaut crew. There is currently no particulate monitoring, although particulate matter requirements exist. An experiment to collect particles in the ISS cabin was conducted recently. Two different aerosol samplers were used for redundancy and to collect particles in two size ranges spanning from 10 nm to hundreds of micrometers. The Active Sampler is a battery operated thermophoretic sampler with an internal pump which draws in air and collects particles directly on a transmission electron microscope grid. This commercial-off-the-shelf device was modified for operation in low gravity. The Passive Sampler has five sampling surfaces which were exposed to air for different durations in order to collect at least one sample with an optimal quantity of particles for microscopy. These samples were returned to Earth for analysis with a variety of techniques to obtain long-term average concentrations and identify particle emission sources. Results are compared with the inventory of ISS aerosols which was created based on sparse data and the literature. The goal of the experiment is to obtain data on indoor aerosols on ISS for future particulate monitor design and development. 18. Flywheel Energy Storage System Designed for the International Space Station Science.gov (United States) Delventhal, Rex A. 2002-01-01 Following successful operation of a developmental flywheel energy storage system in fiscal year 2000, researchers at the NASA Glenn Research Center began developing a flight design of a flywheel system for the International Space Station (ISS). 
In such an application, a two-flywheel system can replace one of the nickel-hydrogen battery strings in the ISS power system. The development unit, sized at approximately one-eighth the size needed for the ISS, was run at 60,000 rpm. The design point for the flight unit is a larger composite flywheel, approximately 17 in. long and 13 in. in diameter, running at 53,000 rpm when fully charged. A single flywheel system stores 2.8 kW-hr of usable energy, enough to light a 100-W light bulb for over 24 hr. When housed in an ISS orbital replacement unit, the flywheel would provide energy storage with approximately 3 times the service life of the nickel-hydrogen battery currently in use. 19. International Space Station (ISS) Advanced Recycle Filter Tank Assembly (ARFTA) Science.gov (United States) Nasrullah, Mohammed K. 2013-01-01 The International Space Station (ISS) Recycle Filter Tank Assembly (RFTA) provides the following three primary functions for the Urine Processor Assembly (UPA): volume for concentrating/filtering pretreated urine, filtration of product distillate, and filtration of the Pressure Control and Pump Assembly (PCPA) effluent. The RFTAs, under nominal operations, are to be replaced every 30 days. This poses a significant logistical resupply problem, as well as costs in upmass and new tank purchases. In addition, it requires a significant amount of crew time. To address and resolve these challenges, NASA required Boeing to develop a design which eliminated the logistics and upmass issues and minimized recurring costs. Boeing developed the Advanced Recycle Filter Tank Assembly (ARFTA) that allowed the tanks to be emptied on-orbit into disposable tanks, eliminating the need for bringing the fully loaded tanks to Earth for refurbishment and relaunch and thereby eliminating several hundred pounds of upmass and its associated costs. The ARFTA will replace the RFTA by providing the same functionality, but with reduced resupply requirements. 20.
Rapid toxicity detection in water quality control utilizing automated multispecies biomonitoring for permanent space stations Science.gov (United States) Morgan, E. L.; Young, R. C.; Smith, M. D.; Eagleson, K. W. 1986-01-01 The objective of this study was to evaluate proposed design characteristics and applications of automated biomonitoring devices for real-time toxicity detection in water quality control on board permanent space stations. Tests of downlinking automated biomonitoring data to Earth-receiving stations were simulated using satellite data transmissions from remote Earth-based stations. 1. NRT Lightning Imaging Sensor (LIS) on International Space Station (ISS) Science Data Vb0 Data.gov (United States) National Aeronautics and Space Administration — The NRT Lightning Imaging Sensor (LIS) on International Space Station (ISS) Science Data were collected by the LIS instrument on the ISS used to detect the... 2. NRT Lightning Imaging Sensor (LIS) on International Space Station (ISS) Backgrounds Vb0 Data.gov (United States) National Aeronautics and Space Administration — The NRT Lightning Imaging Sensor (LIS) on International Space Station (ISS) Backgrounds dataset was collected by the LIS instrument on the ISS used to detect the... 3. Non-Quality Controlled Lightning Imaging Sensor (LIS) on International Space Station (ISS) Backgrounds Vb0 Data.gov (United States) National Aeronautics and Space Administration — The Non-Quality Controlled Lightning Imaging Sensor (LIS) on International Space Station (ISS) Backgrounds dataset was collected by the LIS instrument on the ISS... 4. Fungal Spores Viability on the International Space Station Science.gov (United States) Gomoiu, I.; Chatzitheodoridis, E.; Vadrucci, S.; Walther, I.; Cojoc, R.
2016-11-01 In this study we investigated the security of a spaceflight experiment from two points of view: spreading of dried fungal spores placed on the different wafers and their viability during short- and long-term missions on the International Space Station (ISS). Microscopic characteristics of spores from dried spore samples were investigated, as well as the morphology of the colonies obtained from spores that survived during the mission. The selected fungal species were: Aspergillus niger, Cladosporium herbarum, Ulocladium chartarum, and Basipetospora halophila. They were chosen mainly based on their involvement in the biodeterioration of different substrates in the ISS as well as their presence as possible contaminants of the ISS. From a biological point of view, three of the selected species are black fungi, with high melanin content and therefore highly resistant to space radiation. The visual inspection and analysis of the images taken before and after the short- and long-term experiments have shown that all biocontainers were returned to Earth without damage. Microscope images of the lids of the culture plates revealed that the spores of all species were actually not detached from the surface of the wafers and did not contaminate the lids. From the adhesion point of view, all types of wafers can be used in space experiments, with a special comment on viability in the particular case of iron wafers when used for spores that belong to B. halophila (halophilic strain). This is encouraging for performing experiments with fungi without risking contamination. Spore viability was lower after long-term exposure to ISS conditions than after the short-term experiment. From the observations, it is suggested that the environment of the enclosed biocontainer, as well as the species-specific behaviour, have an important effect, reducing the viability in time.
Even though the spores were not detached from the surface of the wafers, it was observed that spores used in the 5. Standard payload computer for the international space station Science.gov (United States) Knott, Karl; Taylor, Chris; Koenig, Horst; Schlosstein, Uwe 1999-01-01 6. Cerebral vascular reactivity on return from the International Space Station Science.gov (United States) Zuj, Kathryn; Greaves, Danielle; Shoemaker, Kevin; Blaber, Andrew; Hughson, Richard L. Returning from spaceflight, astronauts experience a high incidence of orthostatic intolerance and syncope. Longer duration space flight may result in greater adaptations to microgravity which could increase the post-flight incidence of syncope. CCISS (Cardiovascular and Cerebrovascular Control on return from the International Space Station) is an ongoing project designed to help determine adaptations that occur during spaceflight which may contribute to orthostatic intolerance. One component of this project involves looking at cerebral vascular responses before and after long duration spaceflight. As a known vasodilator, carbon dioxide (CO2) has been frequently used to assess changes in cerebral vascular reactivity. In this experiment, end-tidal PCO2 was manipulated through changes in respired air. Two breaths of a 10% CO2 gas mixture were administered at 1-min intervals, resulting in an increase in end-tidal PCO2. Throughout the testing, cerebral blood flow velocity (CBFV) was determined using transcranial Doppler ultrasound. The cerebral resistance index (RI) was calculated from the Doppler waveform using the equation RI = (CBFVsystolic - CBFVdiastolic)/CBFVsystolic. Changes in this index have been shown to reflect changes in cerebral vascular resistance. Peak responses to the CO2 stimulus were determined and compared to baseline measures taken at the beginning of the testing. Cerebral blood flow velocity increased and RI decreased with the two breaths of CO2.
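The resistance-index calculation defined in the CCISS abstract above is a one-line formula; a minimal sketch with illustrative velocity values (not study data) shows how a drop in RI signals vasodilation:

```python
# Cerebral resistance index from transcranial Doppler velocities,
# RI = (Vsys - Vdia) / Vsys, as stated in the abstract above.
# The velocity values are illustrative assumptions, not study data.

def resistance_index(v_systolic, v_diastolic):
    """Resistance index from peak systolic and end-diastolic CBFV (cm/s)."""
    return (v_systolic - v_diastolic) / v_systolic

baseline = resistance_index(v_systolic=100.0, v_diastolic=45.0)
during_co2 = resistance_index(v_systolic=110.0, v_diastolic=60.0)

# A lower RI after the CO2 breaths indicates reduced cerebral
# vascular resistance (vasodilation).
percent_change = 100.0 * (during_co2 - baseline) / baseline
print(round(baseline, 3), round(during_co2, 3), round(percent_change, 1))
```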
Preliminary data show a 36.0% increase in CBFV and a 9.0% decrease in RI pre-flight. Post flight, the response to CO2 appears to change showing a potentially blunted decrease in resistance (6.8%) and a smaller increase in CBFV (22.8%). Long term spaceflight may result in cerebrovascular changes which could decrease the vasodilatory capacity of cerebral resistance vessels. Further investigations in the CCISS project will reveal the interactive role of CO2 and arterial blood pressure on maintenance of brain 7. Correlation techniques as applied to pose estimation in space station docking Science.gov (United States) Rollins, John M.; Juday, Richard D.; Monroe, Stanley E., Jr. 2002-08-01 The telerobotic assembly of space-station components has become the method of choice for the International Space Station (ISS) because it offers a safe alternative to the more hazardous option of space walks. The disadvantage of telerobotic assembly is that it does not necessarily provide for direct arbitrary views of mating interfaces for the teleoperator. Unless cameras are present very close to the interface positions, such views must be generated graphically, based on calculated pose relationships derived from images. To assist in this photogrammetric pose estimation, circular targets, or spots, of high contrast have been affixed on each connecting module at carefully surveyed positions. The appearance of a subset of spots must form a constellation of specific relative positions in the incoming image stream in order for the docking to proceed. Spot positions are expressed in terms of their apparent centroids in an image. The precision of centroid estimation is required to be as fine as 1/20th pixel, in some cases. This paper presents an approach to spot centroid estimation using cross correlation between spot images and synthetic spot models of precise centration. Techniques for obtaining sub-pixel accuracy and for shadow and lighting irregularity compensation are discussed. 8. 
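The spot-centroid approach in the pose-estimation abstract above, cross-correlating spot images with synthetic spot models and refining to sub-pixel precision, can be sketched as follows. The Gaussian spot model, image sizes, and parabolic peak interpolation are illustrative assumptions, not the flight algorithm.

```python
import numpy as np

# Sub-pixel spot localization: correlate an image against a synthetic
# spot model of precise centration, then refine the correlation peak
# with parabolic interpolation along each axis.

def gaussian(shape, cy, cx, sigma):
    """Synthetic circular spot with unit peak amplitude."""
    y, x = np.indices(shape)
    return np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))

def cross_correlate(image, template):
    """Valid-mode 2-D sliding cross-correlation (loop-based for clarity)."""
    th, tw = template.shape
    rows = image.shape[0] - th + 1
    cols = image.shape[1] - tw + 1
    out = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            out[i, j] = np.sum(image[i:i + th, j:j + tw] * template)
    return out

def parabolic_offset(m1, c0, p1):
    """Sub-sample vertex offset of a parabola through three samples."""
    return 0.5 * (m1 - p1) / (m1 - 2.0 * c0 + p1)

# Simulated 32x32 camera image of one docking-target spot.
true_cy, true_cx = 15.3, 12.7
image = gaussian((32, 32), true_cy, true_cx, sigma=2.0)

template = gaussian((11, 11), 5.0, 5.0, sigma=2.0)  # synthetic spot model
corr = cross_correlate(image, template)
pi, pj = np.unravel_index(np.argmax(corr), corr.shape)

# Refine each axis using the peak's neighbors, then map the template's
# top-left coordinate back to the spot center (template center at 5, 5).
est_cy = pi + parabolic_offset(corr[pi - 1, pj], corr[pi, pj], corr[pi + 1, pj]) + 5.0
est_cx = pj + parabolic_offset(corr[pi, pj - 1], corr[pi, pj], corr[pi, pj + 1]) + 5.0
```

On this synthetic Gaussian spot the recovered centroid lands within a few hundredths of a pixel of the true center; reaching the 1/20th-pixel requirement on real imagery additionally needs the shadow and lighting compensation the abstract mentions.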
Space station as a vital focus for advancing the technologies of automation and robotics Science.gov (United States) Varsi, Giulio; Herman, Daniel H. A major guideline for the design of the United States' Space Station is that the Space Station address a wide variety of functions. These functions include the servicing of unmanned assets in space, the support of commercial laboratories in space and the efficient management of the Space Station itself: the largest space asset. For the Space Station to successfully address these and other functions, the operating costs must be minimized. Furthermore, crew time in space will be an exceedingly scarce and valuable commodity. The human operator should perform only those tasks that are unique in demanding the use of the human creative capability in coping with unanticipated events. The technologies of automation and robotics (A & R) promise to help reduce Space Station operating costs and to achieve a highly efficient use of the human in space. The use of advanced automation and artificial intelligence techniques, such as expert systems, in Space Station subsystems for activity planning and failure mode management will enable us to reduce dependency on a mission control center and could ultimately result in breaking the umbilical link from Earth to the Space Station. The application of robotic technologies with advanced perception capability and hierarchical intelligent control to servicing systems will enable us to service assets either at the Space Station or in situ with a high degree of human efficiency. This paper presents the results of studies conducted by NASA and its contractors, at the urging of the Congress, leading toward the formulation of an automation and robotics plan for Space Station development. 9.
Integrating Space Flight Resource Management Skills into Technical Lessons for International Space Station Flight Controller Training Science.gov (United States) Baldwin, Evelyn 2008-01-01 The Johnson Space Center's (JSC) International Space Station (ISS) Space Flight Resource Management (SFRM) training program is designed to teach the team skills required to be an effective flight controller. It was adapted from the SFRM training given to Shuttle flight controllers to fit the needs of a "24 hours a day/365 days a year" flight controller. More recently, the length reduction of technical training flows for ISS flight controllers reduced the number of opportunities for fully integrated team scenario based training, where most SFRM training occurred. Thus, the ISS SFRM training program is evolving yet again, using a new approach of teaching and evaluating SFRM alongside technical materials. Because there are very few models in other industries that have successfully tied team and technical skills together, challenges are arising. Despite this, the Mission Operations Directorate of NASA's JSC is committed to implementing this integrated training approach because of the anticipated benefits. 10. Experimenting Galileo on Board the International Space Station Science.gov (United States) Fantinato, Samuele; Pozzobon, Oscar; Sands, Obed S.; Welch, Bryan W.; Clapper, Carolyn J.; Miller, James J.; Gamba, Giovanni; Chiara, Andrea; Montagner, Stefano; Giordano, Pietro; 2016-01-01 11. Science.gov (United States) 2010-10-01 ... the Direct Broadcast Satellite Service. 25.215 Section 25.215 Telecommunication FEDERAL COMMUNICATIONS... Technical requirements for space stations in the Direct Broadcast Satellite Service. In addition to § 25.148(f), space station antennas operating in the Direct Broadcast Satellite Service must be designed to... 12.
Manned space station environmental control and life support system computer-aided technology assessment program Science.gov (United States) Hall, J. B., Jr.; Pickett, S. J.; Sage, K. H. 1984-01-01 A computer program for assessing manned space station environmental control and life support systems technology is described. The methodology, mission model parameters, evaluation criteria, and data base for 17 candidate technologies for providing metabolic oxygen and water to the crew are discussed. Examples are presented which demonstrate the capability of the program to evaluate candidate technology options for evolving space station requirements. 13. Potential applications of expert systems and operations research to space station logistics functions Science.gov (United States) Lippiatt, Thomas F.; Waterman, Donald 1985-01-01 The applicability of operations research, artificial intelligence, and expert systems to logistics problems for the space station was assessed. Promising application areas were identified for space station logistics. A needs assessment is presented and a specific course of action in each area is suggested. 14. Integrated dynamic analysis simulation of space stations with controllable solar arrays (supplemental data and analyses) Science.gov (United States) Heinrichs, J. A.; Fee, J. J. 1972-01-01 Space station and solar array data are presented, along with the analyses performed in support of the integrated dynamic analysis study. The analysis methods and the formulated digital simulation were developed. Control systems for space station attitude control and solar array orientation control include generic type control systems. These systems have been digitally coded and included in the simulation. 15. Microbial detection and monitoring in advanced life support systems like the International Space Station NARCIS (Netherlands) van Tongeren, Sandra P.; Krooneman, Janneke; Raangs, Gerwin C.; Welling, Gjalt W.; Harmsen, Hermie J. M.
2006-01-01 Potentially pathogenic microbes and so-called technophiles may form a serious threat in advanced life support systems, such as the International Space Station (ISS). They not only pose a threat to the health of the crew, but also to the technical equipment and materials of the space station. The 16. Microbial detection and monitoring in advanced life support systems like the international space station NARCIS (Netherlands) van Tongeren, Sandra P.; Krooneman, Janneke; Raangs, Gerwin C.; Welling, Gjalt W.; Harmsen, Hermie J. M. 2007-01-01 Potentially pathogenic microbes and so-called technophiles may form a serious threat in advanced life support systems, such as the International Space Station (ISS). They not only pose a threat to the health of the crew, but also to the technical equipment and materials of the space station. The 17. Modular space station, phase B extension. Information management advanced development. Volume 5: Software assembly Science.gov (United States) Gerber, C. R. 1972-01-01 The development of uniform computer program standards and conventions for the modular space station is discussed. The accomplishments analyzed are: (1) development of computer program specification hierarchy, (2) definition of computer program development plan, and (3) recommendations for utilization of all operating on-board space station related data processing facilities. 18. Overhead Costs: Costs Charged by McDonnell Douglas Aerospace’s Space Station Division Science.gov (United States) 1994-06-23 contains few limits on employee education expenses. Additional FAR coverage or other guidance on these areas may be needed. The sustention of DCAA...such as exists with the Space Station Division. Sustention of DCAA The most recent indirect expense rate negotiations completed at the Space Station 19. Microgravity heat pump for space station thermal management. 
Science.gov (United States) Domitrovic, R E; Chen, F C; Mei, V C; Spezia, A L 2003-01-01 A highly efficient recuperative vapor compression heat pump was developed and tested for its ability to operate independent of orientation with respect to gravity while maximizing temperature lift. The objective of such a heat pump is to increase the temperature of, and thus reduce the size of, the radiative heat rejection panels on spacecraft such as the International Space Station. Heat pump operation under microgravity was approximated by gravity-independent experiments. Test evaluations include functionality, efficiency, and temperature lift. Commercially available components were used to minimize costs of new hardware development. Testing was completed on two heat pump design iterations, LBU-I and LBU-II, for a variety of operating conditions under the variation of several system parameters, including: orientation, evaporator water inlet temperature (EWIT), condenser water inlet temperature (CWIT), and compressor speed. The LBU-I system employed an AC motor, a belt-driven scroll compressor, and tube-in-tube heat exchangers. The LBU-II system used a direct-drive AC motor compressor assembly and plate heat exchangers. The LBU-II system in general outperformed the LBU-I system on all counts. Results are presented for all systems, with particular attention to those states that perform with a COP of 4.5 +/- 10% and can maintain a temperature lift of 55 degrees F (30.6 degrees C) +/- 10%. A calculation of potential radiator area reduction shows that points with maximum temperature lift give the greatest potential for reduction, and that area reduction is a function of heat pump efficiency and a stronger function of temperature lift. 20. Materials Science Research Rack Onboard the International Space Station Science.gov (United States) Reagan, S. E.; Lehman, J. R.; Frazier, N. C.
2016-01-01 The Materials Science Research Rack (MSRR) is a research facility developed under a cooperative research agreement between NASA and ESA for materials science investigations on the International Space Station (ISS). MSRR was launched on STS-128 in August 2009 and currently resides in the U.S. Destiny Laboratory Module. Since that time, MSRR has logged more than 1400 hours of operating time. The MSRR accommodates advanced investigations in the microgravity environment on the ISS for basic materials science research in areas such as solidification of metals and alloys. The purpose is to advance the scientific understanding of materials processing as affected by microgravity and to gain insight into the physical behavior of materials processing. MSRR allows for the study of a variety of materials, including metals, ceramics, semiconductor crystals, and glasses. Materials science research benefits from the microgravity environment of space, where the researcher can better isolate chemical and thermal properties of materials from the effects of gravity. With this knowledge, reliable predictions can be made about the conditions required on Earth to achieve improved materials. MSRR is a highly automated facility with a modular design capable of supporting multiple types of investigations. The NASA-provided Rack Support Subsystem provides services (power, thermal control, vacuum access, and command and data handling) to the ESA-developed Materials Science Laboratory (MSL) that accommodates interchangeable Furnace Inserts (FI). Two ESA-developed FIs are presently available on the ISS: the Low Gradient Furnace (LGF) and the Solidification and Quenching Furnace (SQF). Sample Cartridge Assemblies (SCAs), each containing one or more material samples, are installed in the FI by the crew and can be processed at temperatures up to 1400 °C. ESA continues to develop samples, with 14 planned for launch and processing in the near future.
Additionally, NASA has begun developing SCAs to 1. Detection of DNA damage by space radiation in human fibroblasts flown on the International Space Station Science.gov (United States) Lu, Tao; Zhang, Ye; Wong, Michael; Feiveson, Alan; Gaza, Ramona; Stoffle, Nicholas; Wang, Huichen; Wilson, Bobby; Rohde, Larry; Stodieck, Louis; Karouia, Fathi; Wu, Honglu 2017-02-01 Although charged particles in space have been detected with radiation detectors on board spacecraft since the discovery of the Van Allen Belts, reports on the effects of direct exposure to space radiation in biological systems have been limited. Measurement of biological effects of space radiation is challenging due to the low dose and low dose rate nature of the radiation environment, and due to the difficulty in distinguishing the radiation effects from microgravity and other space environmental factors. In astronauts, only a few changes, such as increased chromosome aberrations in their lymphocytes and early onset of cataracts, are attributed primarily to their exposure to space radiation. In this study, cultured human fibroblasts were flown on the International Space Station (ISS). Cells were kept at 37 °C in space for 14 days before being fixed for analysis of DNA damage with the γ-H2AX assay. The 3-dimensional γ-H2AX foci were captured with a laser confocal microscope. Quantitative analysis revealed several foci that were larger and displayed a track pattern only in the Day 14 flight samples. To confirm that the foci observed in the flight study were actually induced by space radiation exposure, cultured human fibroblasts were exposed to low dose rate γ rays at 37 °C. Cells exposed to chronic γ rays showed similar foci size distribution in comparison to the non-exposed controls. The cells were also exposed to low- and high-LET protons, and high-LET Fe ions on the ground.
Our results suggest that in G1 human fibroblasts under the normal culture condition, only a small fraction of large size foci can be attributed to high-LET radiation in space. 2. Detection of DNA Damage by Space Radiation in Human Fibroblasts Flown on the International Space Station Science.gov (United States) Lu, Tao; Zhang, Ye; Wong, Michael; Feiveson, Alan; Gaza, Ramona; Stoffle, Nicholas; Wang, Huichen; Wilson, Bobby; Rohde, Larry; Stodieck, Louis; 2017-01-01 Although charged particles in space have been detected with radiation detectors on board spacecraft since the discovery of the Van Allen Belts, reports on the effects of direct exposure to space radiation in biological systems have been limited. Measurement of biological effects of space radiation is challenging due to the low dose and low dose rate nature of the radiation environment, and due to the difficulty in distinguishing the radiation effects from microgravity and other space environmental factors. In astronauts, only a few changes, such as increased chromosome aberrations in their lymphocytes and early onset of cataracts, are attributed primarily to their exposure to space radiation. In this study, cultured human fibroblasts were flown on the International Space Station (ISS). Cells were kept at 37 degrees Centigrade in space for 14 days before being fixed for analysis of DNA damage with the gamma-H2AX assay. The 3-dimensional gamma-H2AX foci were captured with a laser confocal microscope. Quantitative analysis revealed several foci that were larger and displayed a track pattern only in the Day 14 flight samples. To confirm that the foci observed in the flight study were actually induced by space radiation exposure, cultured human fibroblasts were exposed to low dose rate gamma rays at 37 degrees Centigrade. Cells exposed to chronic gamma rays showed similar foci size distribution in comparison to the non-exposed controls.
The cells were also exposed to low- and high-LET (Linear Energy Transfer) protons, and high-LET Fe ions on the ground. Our results suggest that in G1 human fibroblasts under the normal culture condition, only a small fraction of large size foci can be attributed to high-LET radiation in space. 3. Directory of Open Access Journals (Sweden) David A. Coil 2016-03-01 Full Text Available Background. While significant attention has been paid to the potential risk of pathogenic microbes aboard crewed spacecraft, the non-pathogenic microbes in these habitats have received less consideration. Preliminary work has demonstrated that the interior of the International Space Station (ISS) has a microbial community resembling those of built environments on Earth. Here we report the results of sending 48 bacterial strains, collected from built environments on Earth, for a growth experiment on the ISS. This project was a component of Project MERCCURI (Microbial Ecology Research Combining Citizen and University Researchers on ISS). Results. Of the 48 strains sent to the ISS, 45 of them showed similar growth in space and on Earth using a relative growth measurement adapted for microgravity. The vast majority of species tested in this experiment have also been found in culture-independent surveys of the ISS. Only one bacterial strain showed significantly different growth in space. Bacillus safensis JPL-MERTA-8-2 grew 60% better in space than on Earth. Conclusions. The majority of bacteria tested were not affected by conditions aboard the ISS in this experiment (e.g., microgravity, cosmic radiation). Further work on Bacillus safensis could lead to interesting insights on why this strain grew so much better in space. 4.
Space station automation study: Automation requirements derived from space manufacturing concepts, volume 2 Science.gov (United States) 1984-01-01 Automation requirements were developed for two manufacturing concepts: (1) Gallium Arsenide Electroepitaxial Crystal Production and Wafer Manufacturing Facility, and (2) Gallium Arsenide VLSI Microelectronics Chip Processing Facility. A functional overview of the ultimate design concept incorporating the two manufacturing facilities on the space station is provided. The concepts were selected to facilitate an in-depth analysis of manufacturing automation requirements in the form of process mechanization, teleoperation and robotics, sensors, and artificial intelligence. While the cost-effectiveness of these facilities was not analyzed, both appear entirely feasible for the year 2000 timeframe. 5. The expanded role of computers in Space Station Freedom real-time operations Science.gov (United States) Crawford, R. Paul; Cannon, Kathleen V. 1990-01-01 The challenges that NASA and its international partners face in their real-time operation of the Space Station Freedom necessitate an increased role on the part of computers. In building the operational concepts concerning the role of the computer, the Space Station program is using lessons learned from past programs, knowledge of the needs of future space programs, and technical advances in the computer industry. The computer is expected to contribute most significantly in real-time operations by forming a versatile operating architecture, a responsive operations tool set, and an environment that promotes effective and efficient utilization of Space Station Freedom resources. 6. Winter bait stations as a multispecies survey tool Science.gov (United States) Lacy Robinson; Samuel A. Cushman; Michael K.
Lucid 2017-01-01 Winter bait stations are becoming a commonly used technique for multispecies inventory and monitoring but a technical evaluation of their effectiveness is lacking. Bait stations have three components: carcass attractant, remote camera, and hair snare. Our 22,975 km2 mountainous study area was stratified with a 5 × 5 km sampling grid centered on northern Idaho and... 7. Robonaut 2 - Building a Robot on the International Space Station Science.gov (United States) Diftler, Myron; Badger, Julia; Joyce, Charles; Potter, Elliott; Pike, Leah 2015-01-01 8. Space Medicine: Shuttle - Space Station Crew Health and Safety Challenges for Exploration Science.gov (United States) Dervay, Joseph 2010-01-01 This slide presentation combines views of the shuttle takeoff, the shuttle and space station on orbit, and underwater astronaut training, with a general discussion of Space Medicine. It begins with a discussion of some of the physiological issues of space flight. These include: Space Motion Sickness (SMS), Cardiovascular, Neurovestibular, Musculoskeletal, and Behavioral/Psycho-social. There is also discussion of the space environment and the issues that are posed, including: Radiation, Toxic products and propellants, Habitability, Atmosphere, and Medical events. Included also is a discussion of the systems and crew training. There are also artists' views of the Constellation vehicles, the planned lunar base, and extended lunar settlement. There are also slides showing the size of Earth in perspective to the other planets and the sun, and the sun in perspective to other stars. There is also a discussion of the in-flight changes that occur in neural feedback that produce postural imbalance and loss of coordination after return. 9.
Space vehicle field unit and ground station system Energy Technology Data Exchange (ETDEWEB) Judd, Stephen; Dallmann, Nicholas; Delapp, Jerry; Proicou, Michael; Seitz, Daniel; Michel, John; Enemark, Donald 2017-09-19 A field unit and ground station may use commercial off-the-shelf (COTS) components and share a common architecture, where differences in functionality are governed by software. The field units and ground stations may be easy to deploy, relatively inexpensive, and relatively easy to operate. A novel file system may be used where datagrams of a file may be stored across multiple drives and/or devices. The datagrams may be received out of order and reassembled at the receiving device. 10. The Era of International Space Station Research: Discoveries and Potential of an Unprecedented Laboratory in Space Science.gov (United States) Robinson, Julie A. 2011-01-01 The assembly of the International Space Station was completed in early 2011. Its largest research instrument, the Alpha Magnetic Spectrometer, is planned for launch in late April. Unlike any previous laboratory in space, the ISS offers a long-term platform where scientists can operate experiments rapidly after developing a new research question, and extend their experiments based on early results. This presentation will explain why having a laboratory in orbit is important for a wide variety of experiments that cannot be done on Earth. Some of the most important results from early experiments are already having impacts in areas such as health care, telemedicine, and disaster response. The coming decade of full utilization offers the promise of new understanding of the nature of physical and biological processes and even of matter itself. 11. Fifteen-foot diameter modular space station Kennedy Space Center launch site support definition (space station program Phase B extension definition) Science.gov (United States) Bjorn, L. C.; Martin, M. L.; Murphy, C. W.; Niebla, J.
F., V 1971-01-01 This document defines the facilities, equipment, and operational plans required to support the MSS Program at KSC. Included is an analysis of KSC operations, a definition of flow plans, facility utilization and modifications, test plans and concepts, activation, and tradeoff studies. Existing GSE and facilities that have a potential utilization are identified, and new items are defined where possible. The study concludes that the existing facilities are suitable for use in the space station program without major modification from the Saturn-Apollo configuration. 12. Technology forecast and applications for autonomous, intelligent systems. [for space station, shuttle, and interplanetary missions Science.gov (United States) Lum, Henry, Jr.; Heer, Ewald 1988-01-01 Significant research products which have emerged from the core program of NASA's Office of Aeronautics and Space Technology (OAST) are discussed. The Space Station Thermal Control System, the Space Shuttle Integrated Communications Officer Station, the Launch Processing System, the Expert Scheduling System for Pioneer Venus Spacecraft, a Bayesian classification system, and a spaceborne multiprocessor system are included. The technology trends which led to these results are discussed and future developments in technology are forecasted. 13. Ballistic limit regression analysis for Space Station Freedom meteoroid and space debris protection system Science.gov (United States) Jolly, William H. 1992-01-01 Relationships defining the ballistic limit of Space Station Freedom's (SSF) dual wall protection systems have been determined. These functions were regressed from empirical data found in Marshall Space Flight Center's (MSFC) Hypervelocity Impact Testing Summary (HITS) for the velocity range between three and seven kilometers per second. A stepwise linear least squares regression was used to determine the coefficients of several expressions that define a ballistic limit surface. 
Using statistical significance indicators and graphical comparisons to other limit curves, a final set of expressions is recommended for potential use in Probability of No Critical Flaw (PNCF) calculations for Space Station. The three equations listed below represent the mean curves for normal, 45-degree, and 65-degree obliquity ballistic limits, respectively, for a dual wall protection system consisting of a thin 6061-T6 aluminum bumper spaced 4.0 inches from a 0.125-inch-thick 2219-T87 rear wall with multiple-layer thermal insulation installed between the two walls. Normal obliquity: d_c = 1.0514 v^0.2983 t_1^0.5228. Forty-five degree obliquity: d_c = 0.8591 v^0.0428 t_1^0.2063. Sixty-five degree obliquity: d_c = 0.2824 v^0.1986 t_1^(-0.3874). Plots of these curves are provided. A sensitivity study on the effects of using these new equations in the probability of no critical flaw analysis indicated a negligible increase in the performance of the dual wall protection system for SSF over the current baseline. The magnitude of the increase was 0.17 percent over 25 years on the MB-7 configuration run with the Bumper II program code. 14. Multispecies Biofilm Development on Space Station Heat Exchanger Core Material Science.gov (United States) Pyle, B. H.; Roth, S. R.; Vega, L. M.; Pickering, K. D.; Alvarez, Pedro J. J.; Roman, M. C. 2007-01-01 Investigations of microbial contamination of the cooling system aboard the International Space Station (ISS) suggested that there may be a relationship between heat exchanger (HX) materials and the degree of microbial colonization and biofilm formation. Experiments were undertaken to test the hypothesis that biofilm formation is influenced by the type and previous exposure of HX surfaces.
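The ballistic limit regressions reported in entry 13 all reduce to the power-law form d_c = a·v^b·t_1^c, so they are straightforward to evaluate; a minimal sketch, in which the coefficients are taken from the abstract but the example velocity and bumper thickness are hypothetical inputs, not values from the study:

```python
# Hedged sketch: evaluating the entry-13 ballistic limit regressions,
# d_c = a * v^b * t_1^c, where d_c is the critical particle diameter (in),
# v the impact velocity (km/s), and t_1 the bumper thickness (in).
# Coefficients come from the abstract; the example inputs are hypothetical.

def critical_diameter(v, t1, obliquity="normal"):
    """Mean-curve ballistic limit d_c for the dual-wall shield."""
    coeffs = {
        "normal": (1.0514, 0.2983, 0.5228),
        "45deg": (0.8591, 0.0428, 0.2063),
        "65deg": (0.2824, 0.1986, -0.3874),
    }
    a, b, c = coeffs[obliquity]
    return a * v**b * t1**c

for angle in ("normal", "45deg", "65deg"):
    print(angle, round(critical_diameter(6.0, 0.05, angle), 4))
```

Note the sign of the thickness exponent: for the 65-degree curve a thicker bumper actually lowers the mean critical diameter, which is consistent with the regression being a fit to the empirical HITS data rather than a physical model.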
Acidovorax delafieldii, Comamonas acidovorans, Hydrogenophaga pseudoflava, Pseudomonas stutzeri, Sphingomonas paucimobilis, and Stenotrophomonas maltophilia, originally isolated from ISS cooling system fluid, were cultured on R2A agar and suspended separately in fresh filter-sterilized ISS cooling fluid, pH 8.3. Initial numbers in each suspension ranged from 10^6-10^7 CFU/ml, and a mixture contained greater than 10^7 CFU/ml. Coupons of ISS HX material, previously used on orbit (HXOO) or unused (HXUU), polycarbonate (PC) and 316L polished stainless steel (SS) were autoclaved, covered with multispecies suspension in sterile tubes, and incubated in the dark at ambient temperature (22-25 °C). Original HX material contained greater than 90% Ni, 4.5% Si, and 3.2% B, with a borate buffer. For approximately 10 weeks, samples of fluid were plated on R2A agar, and surface colonization was assessed by SYBR Green or BacLight staining and microscopy. Suspension counts for the PC and SS samples remained steady at around 10^7 CFU/ml. HXUU counts declined about 1 log in 21 d then remained steady, and HXOO counts declined 2 logs in 28 d, fluctuated, and stabilized at about 10^3 CFU/ml from 47-54 d. Yellow S. paucimobilis predominated on plates from HXOO samples up to 26 d; then white or translucent colonies of other species appeared. All colony types were seen on plates from other samples throughout the trial. Epifluorescence microscopy indicated microbial growth on all surfaces by 21 d, followed by variable colonization. After 54 d, all but the HXOO samples had well 15. Life sciences flight hardware development for the International Space Station Science.gov (United States) Kern, V. D.; Bhattacharya, S.; Bowman, R. N.; Donovan, F. M.; Elland, C.; Fahlen, T. F.; Girten, B.; Kirven-Brooks, M.; Lagel, K.; Meeker, G. B.; Santos, O.
During the construction phase of the International Space Station (ISS), early flight opportunities have been identified (including designated Utilization Flights, UF) on which early science experiments may be performed. The focus of NASA's and other agencies' biological studies on the early flight opportunities is cell and molecular biology, with UF-1 scheduled to fly in fall 2001, followed by flights 8A and UF-3. Specific hardware is being developed to verify design concepts, e.g., the Avian Development Facility for incubation of small eggs and the Biomass Production System for plant cultivation. Other hardware concepts will utilize those early research opportunities onboard the ISS, e.g., an Incubator for sample cultivation, the European Modular Cultivation System for research with small plant systems, an Insect Habitat for support of insect species. Following the first Utilization Flights, additional equipment will be transported to the ISS to expand research opportunities and capabilities, e.g., a Cell Culture Unit, the Advanced Animal Habitat for rodents, an Aquatic Facility to support small fish and aquatic specimens, a Plant Research Unit for plant cultivation, and a specialized Egg Incubator for developmental biology studies. Host systems (Figure 1A, B), e.g., a 2.5 m Centrifuge Rotor (g-levels from 0.01-g to 2-g) for direct comparisons between μg and selectable g levels, the Life Sciences Glovebox for contained manipulations, and Habitat Holding Racks (Figure 1B) will provide electrical power, communication links, and cooling to the habitats. Habitats will provide food, water, light, air and waste management as well as humidity and temperature control for a variety of research organisms. Operators on Earth and the crew on the ISS will be able to send commands to the laboratory equipment to monitor and control the environmental and experimental parameters inside specific habitats. Common laboratory equipment such as microscopes, cryo freezers, radiation 16.
Foot forces during exercise on the International Space Station. Science.gov (United States) Genc, K O; Gopalakrishnan, R; Kuklis, M M; Maender, C C; Rice, A J; Bowersox, K D; Cavanagh, P R 2010-11-16 Long-duration exposure to microgravity has been shown to have detrimental effects on the human musculoskeletal system. To date, exercise countermeasures have been the primary approach to maintaining bone and muscle mass, but they have not been entirely successful. Up until 2008, the three exercise countermeasure devices available on the International Space Station (ISS) were the treadmill with vibration isolation and stabilization (TVIS), the cycle ergometer with vibration isolation and stabilization (CEVIS), and the interim resistance exercise device (iRED). This article examines the available envelope of mechanical loads to the lower extremity that these exercise devices can generate, based on direct in-shoe force measurements performed on the ISS. Four male crewmembers who flew on long-duration ISS missions participated in this study. In-shoe forces were recorded during activities designed to elicit maximum loads from the various exercise devices. Data from typical exercise sessions on Earth and on-orbit were also available for comparison. Maximum on-orbit single-leg loads from TVIS were 1.77 body weight (BW) while running at 8 mph. The largest single-leg forces during resistance exercise were 0.72 BW during single-leg heel raises and 0.68 BW during double-leg squats. Forces during CEVIS exercise were small, approaching only 0.19 BW at 210 W and 95 RPM. We conclude that the three exercise devices studied were not able to elicit loads comparable to exercise on Earth, with the exception of CEVIS at its maximal setting. The decrements were, on average, 77% for walking, 75% for running, and 65% for squats when each device was at its maximum setting.
Future developments must include an improved harness to apply higher gravity replacement loads during locomotor exercise and the provision of greater resistance exercise capability. The present data set provides a benchmark that will enable future researchers to judge whether or not the new generation of exercise countermeasures recently 17. Zeolite thin films: from computer chips to space stations. Science.gov (United States) Lew, Christopher M; Cai, Rui; Yan, Yushan 2010-02-16 Zeolites are a class of crystalline oxides that have uniform and molecular-sized pores (3-12 Å in diameter). Although natural zeolites were first discovered in 1756, significant commercial development did not begin until the 1950s when synthetic zeolites with high purity and controlled chemical composition became available. Since then, major commercial applications of zeolites have been limited to catalysis, adsorption, and ion exchange, all using zeolites in powder form. Although researchers have widely investigated zeolite thin films within the last 15 years, most of these studies were motivated by the potential application of these materials as separation membranes and membrane reactors. In the last decade, we have recognized and demonstrated that zeolite thin films can have new, diverse, and economically significant applications that others had not previously considered. In this Account, we highlight our work on the development of zeolite thin films as low-dielectric-constant (low-k) insulators for future generation computer chips, environmentally benign corrosion-resistant coatings for aerospace alloys, and hydrophilic and microbiocidal coatings for gravity-independent water separation in space stations. Although these three applications might not seem directly related, they all rely on the ability to fine-tune important macroscopic properties of zeolites by changing their ratio of silicon to aluminum.
For example, pure-silica zeolites (PSZs, Si/Al = infinity) are hydrophobic, acid stable, and have no ion exchange capacity, while low-silica zeolites (LSZs, Si/Al zeolites that have not been exploited before, such as a higher elastic modulus, hardness, and heat conductivity than those of amorphous porous silicas, and microbiocidal capabilities derived from their ion exchange capacities. Finally, we briefly discuss our more recent work on polycrystalline zeolite thin films as promising biocompatible coatings and environmentally benign wear-resistant and 18. Biomechanical Analysis of Treadmill Locomotion on the International Space Station Science.gov (United States) De Witt, J. K.; Fincke, R. S.; Guilliams, M. E.; Ploutz-Snyder, L. L. 2011-01-01 19. Rodent Research on the International Space Station - A Look Forward Science.gov (United States) Kapusta, A. B.; Smithwick, M.; Wigley, C. L. 2014-01-01 Rodent Research on the International Space Station (ISS) is one of the highest priority science activities being supported by NASA and is planned for up to two flights per year. The first Rodent Research flight, Rodent Research-1 (RR-1), validates the hardware and basic science operations (dissections and tissue preservation). Subsequent flights will add new capabilities to support rodent research on the ISS. RR-1 will validate the following capabilities: animal husbandry for up to 30 days, video downlink to support animal health checks and scientific analysis, on-orbit dissections, sample preservation in RNAlater and formalin, sample transfer from formalin to ethanol (hindlimbs), rapid cool-down and subsequent freezing at -80 °C of tissues and carcasses, and sample return and recovery.
RR-2, scheduled for SpX-6 (Winter 2014-2015), will add the following capabilities: animal husbandry for up to 60 days, RFID chip reader for individual animal identification, water refill and food replenishment, anesthesia and recovery, bone densitometry, blood collection (via cardiac puncture), blood separation via centrifugation, soft tissue fixation in formalin with transfer to ethanol, and delivery of injectable drugs that require frozen storage prior to use. Additional capabilities are also planned for future flights, and these include but are not limited to male mice, live animal return, and the development of experiment-unique equipment to support science requirements for principal investigators that are selected for flight. In addition to the hardware capabilities to support rodent research, the Crew Office has implemented a training program in generic rodent skills for all USOS crew members during their pre-assignment training rotation. This class includes training in general animal handling, euthanasia, injections, and dissections. The dissection portion of this training focuses on the dissection of the spleen, liver, kidney with adrenals, brain, eyes, and hindlimbs. By achieving and 20. Eclipse of the Floating Orbs: Controlling Robots on the International Space Station Science.gov (United States) Wheeler, D. W. 2017-01-01 I will describe the Control Station for a free-flying robot called Astrobee. Astrobee will serve as a mobile camera, sensor platform, and research testbed when it is launched to the International Space Station (ISS) in 2017. Astronauts on the ISS as well as ground-based users will control Astrobee using the Eclipse-based Astrobee Control Station. Designing the Control Station for use in space presented unique challenges, such as allowing the intuitive input of 3D information without a mouse or trackpad. Come to this talk to learn how Eclipse is used in an environment few humans have the chance to visit. 1.
Why Deep Space Habitats Should Be Different from the International Space Station Science.gov (United States) Griffin, Brand; Brown, MacAulay 2016-01-01 2. Biomechanics of the Treadmill Locomotion on the International Space Station Science.gov (United States) DeWitt, John; Cromwell, R. L.; Ploutz-Snyder, L. L. 2014-01-01 Exercise prescriptions completed by International Space Station (ISS) crewmembers are typically based upon evidence obtained during ground-based investigations, with the assumption that the results of long-term training in weightlessness will be similar to that attained in normal gravity. Coupled with this supposition are the assumptions that exercise motions and external loading are also similar between gravitational environments. Normal control of locomotion is dependent upon learning patterns of muscular activation and requires continual monitoring of internal and external sensory input [1]. Internal sensory input includes signals that may be dependent on or independent of gravity. Bernstein hypothesized that movement strategy planning and execution must include the consideration of segmental weights and inertia [2]. Studies of arm movements in microgravity showed that individuals tend to make errors but that compensation strategies result in adaptations, suggesting that control mechanisms must include peripheral information [3-5]. To date, however, there have been no studies examining a gross motor activity such as running in weightlessness other than using microgravity analogs [6-8]. The objective of this evaluation was to collect biomechanical data from crewmembers during treadmill exercise before and during flight. The goal was to determine locomotive biomechanics similarities and differences between normal and weightless environments. The data will be used to optimize future exercise prescriptions. 
This project addresses the Critical Path Roadmap risks 1 (Accelerated Bone Loss and Fracture Risk) and 11 (Reduced Muscle Mass, Strength, and Endurance). Data were collected from 7 crewmembers before flight and during their ISS missions. Before launch, crewmembers performed a single data collection session at the NASA Johnson Space Center. Three-dimensional motion capture data were collected for 30 s at speeds ranging from 1.5 to 9.5 mph in 0.5 mph increments 3. Environmental "Omics" of International Space Station: Insights, Significance, and Consequences Science.gov (United States) Venkateswaran, Kasthuri 2016-07-01 The NASA Space Biology program funded two multi-year studies to catalogue the International Space Station (ISS) environmental microbiome. The first Microbial Observatory (MO) experiment will generate a microbial census of the ISS surfaces and atmosphere using advanced molecular microbial community analysis "omics" techniques, supported by traditional culture-based methods and state-of-the-art molecular techniques. The second MO experiment will measure the presence of viral and select bacterial and fungal pathogens on ISS surfaces and correlate their presence on crew. The "omics" methodologies of the MO experiments will serve as the foundation for an extensive microbial census, offering significant insight into spaceflight-induced changes in the populations of beneficial and potentially harmful microbes. The safety of crewmembers and the maintenance of hardware are the primary goals for monitoring microorganisms in this closed habitat. The statistical analysis of the ISS microbiomes showed that three bacterial phyla dominated in both ISS and Earth cleanrooms, but varied in their abundances. While members of Actinobacteria were predominant on ISS, Proteobacteria dominated the Earth cleanrooms. Alpha diversity estimators indicated a significant drop in viable microbial diversity.
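The alpha-diversity drop referred to above is typically quantified with estimators such as the Shannon index; a minimal illustration of the calculation, using hypothetical abundance vectors rather than ISS data:

```python
# Hedged sketch: Shannon alpha-diversity index, one of the common estimators;
# the abundance vectors below are hypothetical, not ISS data.
import math

def shannon(counts):
    """Shannon index H' = -sum(p_i * ln p_i) over taxon abundances."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

even = [25, 25, 25, 25]    # evenly distributed community
skewed = [97, 1, 1, 1]     # one dominant taxon: lower diversity
print(round(shannon(even), 3), round(shannon(skewed), 3))  # 1.386 0.168
```

A drop in the index, as reported for the viable ISS community, indicates fewer taxa, a more uneven abundance distribution, or both.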
To better characterize the shared community composition among samples, beta-diversity metric analyses were conducted. At the bacterial species level, microbial community composition was strongly associated with sampling site. Results of the study indicate significant differences between ISS and Earth cleanroom microbiomes in terms of community structure and composition. Bacterial strains isolated from ISS surfaces were also tested for their resistance to nine antibiotics using the conventional disc method and the Vitek 2 system. Most of the Staphylococcus aureus strains were resistant to penicillin. Five strains were specifically resistant to erythromycin, and the ermA gene was also 4. Operation of hydrologic data collection stations by the U.S. Geological Survey in 1985 Science.gov (United States) Condes de la Torre, Alberto 1985-01-01 The U.S. Geological Survey (USGS) operated hydrologic data collection stations during fiscal year 1985 in response to the needs of all levels of government for hydrologic information. Surface water discharge was determined at 11,076 stations; stage data on streams, reservoirs, and lakes were recorded at 2,141 stations; and surface water quality was determined at 4,166 stations. Groundwater levels were measured at 39,301 stations, and the quality of groundwater was determined at 9,263 stations nationwide. Data on sediment were collected daily at 212 stations and on a periodic basis at 1,027 stations. Information on precipitation quantity was collected at 921 stations, and the quality of precipitation was analyzed at 108 stations. Data collection platforms for satellite telemetry of hydrologic information were used at 1,520 USGS stations. Funding support for the hydrologic stations was derived either solely or from a combination of three major sources: the Geological Survey's Federal Program appropriation, the Federal-State Cooperative Program, and reimbursements from other Federal agencies. (Author's abstract) 5.
Detection of DNA Damage by Space Radiation in Human Fibroblasts Flown on the International Space Station Science.gov (United States) Lu, Tao; Zhang, Ye; Wong, Michael; Feiveson, Alan; Gaza, Ramona; Stoffle, Nicholas; Wang, Huichen; Wilson, Bobby; Rohde, Larry; Stodieck, Louis; 2017-01-01 Space radiation consists of energetic charged particles of varying charges and energies. Exposure of astronauts to space radiation on future long-duration missions to Mars, or missions back to the Moon, is expected to result in deleterious consequences such as cancer and compromised central nervous system (CNS) functions. Space radiation can also cause mutation in microorganisms, and potentially influence the evolution of life in space. Measurement of the space radiation environment has been conducted since the very beginning of the space program. Compared to the quantification of the space radiation environment using physical detectors, reports on the direct measurement of biological consequences of space radiation exposure have been limited, due primarily to the low dose and low dose rate nature of the environment. Most of the biological assays fail to detect the radiation effects at acute doses that are lower than 5 centisieverts. In a recent study, we flew cultured confluent human fibroblasts, mostly in the G1 phase of the cell cycle, to the International Space Station (ISS). The cells were fixed in space after arriving on the ISS for 3 and 14 days, respectively. The fixed cells were later returned to the ground and subsequently stained with the gamma-H2AX (Histone family, member X) antibody, which is commonly used as a marker for DNA damage, particularly DNA double-strand breaks, induced by both low- and high-linear energy transfer radiation. In our present study, the gamma-H2AX foci were captured with a laser confocal microscope.
To confirm that some large track-like foci were from space radiation exposure, we also exposed, on the ground, the same type of cells to both low- and high-linear energy transfer protons, and high-linear energy transfer Fe ions. In addition, we exposed the cells to low dose rate gamma rays, in order to rule out the possibility that the large track-like foci can be induced by chronic low-linear energy transfer 6. Science.gov (United States) Uchihori, Yukio; Kodaira, Satoshi; Kitamura, Hisashi; Kobayashi, Shingo For future space experiments on the International Space Station (ISS) or other satellites, radiation detectors using single or multiple silicon semiconductor detectors, A-DREAMS (Active Dosimeter for Radiation Environment and Astronautic Monitoring in Space), have been developed. The first version of the detectors was produced and calibrated with particle accelerators. The National Institute of Radiological Sciences (NIRS) has a medical heavy-ion accelerator (HIMAC) for cancer therapy and a cyclotron accelerator. The detector was irradiated with high-energy heavy ions and protons at HIMAC and the cyclotron, and its energy resolution and linearity were calibrated for the deposited energies of these particles. We plan to use the new instrument in an international project, the new MATROSHKA experiment, which is directed by members of the Institute of Biomedical Problems (IBMP) in Russia and the German Aerospace Center (DLR) in Germany. In that project, the dose distribution in a human torso phantom will be investigated over several months on the ISS. A new type of the instrument is under development at NIRS for the project, and its current status is reported in this paper. 7. Main-Reflector Manufacturing Technology for the Deep Space Optical Communications Ground Station Science.gov (United States) Britcliffe, M. J.; Hoppe, D. J. 2001-01-01 The Deep Space Network (DSN) has plans to develop a 10-m-diameter optical communications receiving station.
The system uses the direct detection technique, which has very different requirements from a typical astronomical telescope. The receiver must operate in daylight and nighttime conditions. This imposes special requirements on the optical system to reject stray light from the Sun and other sources. One of the biggest challenges is designing a main-reflector surface that meets these requirements and can be produced at a reasonable cost. The requirements for the performance of the reflector are presented. To date, an aspherical primary reflector has been assumed. A spherical primary reflector has a major cost advantage over an aspherical design, with no sacrifice in performance. A survey of current manufacturing techniques for optical mirrors of this type was performed. Techniques including solid glass, lightweight glass, diamond-turned aluminum, and composite mirrors were investigated. 8. Analog FM/FM versus digital color TV transmission aboard space station Science.gov (United States) Hart, M. M. 1985-01-01 Langley Research Center is developing an integrated fault-tolerant network to support data, voice, and video communications aboard Space Station. The question of transmitting the video data via dedicated analog channels or converting it to the digital domain for consistency with the rest of the data is addressed. The recommendations in this paper are based on a comparison of the signal-to-noise ratio (SNR), the type of video processing required aboard Space Station, the applicability to Space Station, and how each approach integrates into the network. 9. Life science research objectives and representative experiments for the space station Science.gov (United States) Johnson, Catherine C. (Editor); Arno, Roger D. (Editor); Mains, Richard (Editor) 1989-01-01 A workshop was convened to develop hypothetical experiments to be used as a baseline for space station designers and equipment specifiers to ensure responsiveness to the users, the life science community.
Sixty-five intra- and extramural scientists were asked to describe scientific rationales, science objectives, and give brief representative experiment descriptions compatible with expected space station accommodations, capabilities, and performance envelopes. Experiment descriptions include hypothesis, subject types, approach, equipment requirements, and space station support requirements. The 171 experiments are divided into 14 disciplines. 10. Alkaline water electrolysis technology for Space Station regenerative fuel cell energy storage Science.gov (United States) Schubert, F. H.; Hoberecht, M. A.; Le, M. 1986-01-01 The regenerative fuel cell system (RFCS), designed for application to the Space Station energy storage system, is based on state-of-the-art alkaline electrolyte technology and incorporates a dedicated fuel cell system (FCS) and water electrolysis subsystem (WES). In the present study, emphasis is placed on the WES portion of the RFCS. To ensure RFCS availability for the Space Station, the RFCS Space Station Prototype design was undertaken which included a 46-cell 0.93 cu m static feed water electrolysis module and three integrated mechanical components. 11. Performance Assessment in the PILOT Experiment On Board Space Stations Mir and ISS. Science.gov (United States) Johannes, Bernd; Salnitski, Vyacheslav; Dudukin, Alexander; Shevchenko, Lev; Bronnikov, Sergey 2016-06-01 The aim of this investigation into the performance and reliability of Russian cosmonauts in hand-controlled docking of a spacecraft on a space station (experiment PILOT) was to enhance overall mission safety and crew training efficiency. The preliminary findings on the Mir space station suggested that a break in docking training of about 90 d significantly degraded performance. Intensified experiment schedules on the International Space Station (ISS) have allowed for a monthly experiment using an on-board simulator. 
Therefore, instead of just three training tasks as on Mir, five training flights per session have been implemented on the ISS. This experiment was run in parallel with but independently of the operational docking training the cosmonauts receive. First, performance was compared between the experiments on the two space stations by nonparametric testing. Performance differed significantly between space stations preflight, in flight, and postflight. Second, performance was analyzed with a linear mixed-effects (LME) model. The fixed factors space station, mission phase, training task number, and their interaction were analyzed. Cosmonauts were designated as a random factor. All fixed factors were found to be significant, and the interaction between stations and mission phase was also significant. In summary, performance on the ISS was shown to be significantly improved, thus enhancing mission safety. Additional approaches to docking performance assessment and prognosis are presented and discussed. 12. Electrostatics of Granular Material (EGM): Space Station Experiment Science.gov (United States) Marshall, J.; Sauke, T.; Farrell, W. 2000-01-01 Aggregates were observed to form very suddenly in a lab-contained dust cloud, transforming (within seconds) an opaque monodispersed cloud into a clear volume containing rapidly settling, long hair-like aggregates. The implications of such a "phase change" led to a series of experiments progressing from the lab to the KC-135, followed by micro-g flights on USML-1 and USML-2, and now EGM, slated for Space Station. We attribute the sudden "collapse" of a cloud to the effect of dipoles. This has significant ramifications for all types of cloud systems, and additionally implicates dipoles in the processes of cohesion and adhesion of granular matter.
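The dipole mechanism invoked for the cloud collapse can be illustrated with a rough electrostatics sketch: for two like-charged grains carrying aligned dipoles, monopole repulsion falls off as 1/r², while dipole-dipole attraction falls off as 1/r⁴, so attraction can dominate at short range. The charge and dipole values below are hypothetical illustration numbers, and the collinear aligned-dipole force law is a textbook idealization, not the EGM experiment's model:

```python
# Hedged sketch of the monopole/dipole competition: like-charge (monopole)
# repulsion scales as 1/r^2, while an idealized aligned dipole-dipole
# attraction scales as 1/r^4, so attraction dominates at short range.
# M (net charge) and D (dipole moment) are hypothetical illustration values.

K = 8.9875e9  # Coulomb constant, N*m^2/C^2

def net_force(r, M, D):
    """Net radial force between two identical grains; > 0 means repulsion."""
    return K * M**2 / r**2 - 6.0 * K * D**2 / r**4

M = 1e-14  # C, net charge per grain (hypothetical)
D = 1e-18  # C*m, dipole moment per grain (hypothetical)
r_cross = 6.0**0.5 * D / M  # separation where the two terms balance
print(net_force(0.5 * r_cross, M, D) < 0)  # attraction inside crossover
print(net_force(2.0 * r_cross, M, D) > 0)  # repulsion outside crossover
```

The crossover separation scales with D/M, which is one way to see why the abstract singles out the D/M ratio of each grain as the controlling parameter.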
Notably, there is the inference that like-charged grains need not necessarily repel if they are close enough together: attraction or repulsion depends on intergranular distance (the dipole being more powerful at short range), and the D/M ratio for each grain, where D is the dipole moment and M is the net charge. We discovered that these ideas about dipoles, the likely pervasiveness of them in granular material, the significance of the D/M ratio, and the idea of mixed charges on individual grains resulting from tribological processes -- are not universally recognized in electrostatics, granular material studies, and aerosol science, despite some early seminal work in the literature, and despite commercial applications of dipoles in such modern uses as "Krazy Glue", housecleaning dust cloths, and photocopying. The overarching goal of EGM is to empirically prove that (triboelectrically) charged dielectric grains of material have dipole moments that provide an "always attractive" intergranular force as a result of both positive and negative charges residing on the surfaces of individual grains. Microgravity is required for this experiment because sand grains can be suspended as a cloud for protracted periods, the grains are free to rotate to express their electrostatic character, and Coulombic forces are unmasked. Suspended grains 13. 47 CFR 73.213 - Grandfathered short-spaced stations. Science.gov (United States) 2010-10-01 ... Section 73.213 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES... application for authority to operate a Class A station with no more than 3000 watts ERP and 100 meters antenna HAAT (or equivalent lower ERP and higher antenna HAAT based on a class contour distance of 24 km) must... 14.
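The short-range dominance of the dipole term invoked in the EGM abstract above can be illustrated with a minimal point-charge/ideal-dipole sketch. All numerical values below are assumptions chosen for illustration, not EGM data; the crossover separation follows from equating the monopole repulsion kM^2/r^2 with the best-aligned charge-dipole attraction 2kMD/r^3:

```python
# Minimal sketch (illustrative values, not EGM data): two like-charged grains,
# each carrying net charge M and dipole moment D.  Monopole repulsion falls off
# as 1/r^2 while the best-aligned charge-dipole attraction falls off as 1/r^3,
# so attraction dominates below a crossover separation r* = 2*D/M.

K = 8.988e9    # Coulomb constant, N m^2 / C^2
M = 1e-13      # net charge per grain, C (assumed)
D = 1e-17      # dipole moment per grain, C m (assumed)

def net_force(r):
    """Net radial force between the grains: positive = repulsive, negative = attractive."""
    return K * M**2 / r**2 - 2 * K * M * D / r**3

r_star = 2 * D / M  # separation below which attraction wins, m
print(f"crossover separation: {r_star * 1e6:.0f} micrometres")
```

With these assumed values, grains closer than about 200 micrometres attract each other even though both carry the same sign of net charge, consistent with the D/M-dependent behavior described in the abstract.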
Droplet Combustion Experiments Aboard the International Space Station Science.gov (United States) Dietrich, Daniel L.; Nayagam, Vedha; Hicks, Michael C.; Ferkul, Paul V.; Dryer, Frederick L.; Farouk, Tanvir; Shaw, Benjamin D.; Suh, Hyun Kyu; Choi, Mun Y.; Liu, Yu Cheng; Avedisian, C. Thomas; Williams, Forman A. 2014-10-01 This paper summarizes the first results from isolated droplet combustion experiments performed on the International Space Station (ISS). The long durations of microgravity provided in the ISS enable the measurement of droplet and flame histories over an unprecedented range of conditions. The first experiments were with heptane and methanol as fuels, initial droplet diameters between 1.5 and 5.0 mm, ambient oxygen mole fractions between 0.1 and 0.4, ambient pressures between 0.7 and 3.0 atm, and ambient environments containing oxygen and nitrogen diluted with both carbon dioxide and helium. The experiments show both radiative and diffusive extinction. For both fuels, the flames exhibited pre-extinction flame oscillations during radiative extinction with a frequency of approximately 1 Hz. The results revealed that as the ambient oxygen mole fraction was reduced, the diffusive-extinction droplet diameter increased and the radiative-extinction droplet diameter decreased. In between these two limiting extinction conditions, quasi-steady combustion was observed. Another important measurement that is related to spacecraft fire safety is the limiting oxygen index (LOI), the oxygen concentration below which quasi-steady combustion cannot be supported. This is also the ambient oxygen mole fraction for which the radiative and diffusive extinction diameters become equal. For oxygen/nitrogen mixtures, the LOI is 0.12 and 0.15 for methanol and heptane, respectively.
The LOI increases to approximately 0.14 (0.14 O2/0.56 N2/0.30 CO2) and 0.17 (0.17 O2/0.63 N2/0.20 CO2) for methanol and heptane, respectively, for ambient environments that simulated dispersing an inert-gas suppressant (carbon dioxide) into a nominally air (1.0 atm) ambient environment. The LOI is approximately 0.14 and 0.15 for methanol and heptane, respectively, when helium is dispersed into air at 1 atm. The experiments also showed unique burning behavior for large heptane droplets. After the 15. 76 FR 64122 - NASA Advisory Committee; Renewal of NASA's International Space Station Advisory Committee Charter Science.gov (United States) 2011-10-17 ... SPACE ADMINISTRATION NASA Advisory Committee; Renewal of NASA's International Space Station Advisory Committee Charter AGENCY: National Aeronautics and Space Administration (NASA). ACTION: Notice of renewal... imposed on NASA by law. The renewed Charter is for a one-year period ending September 30, 2012. It is... 16. Space station needs, attributes and architectural options: Midterm review, executive overview Science.gov (United States) 1982-01-01 An overview of the mission architecture of the space station based on user requirements is presented. Interest from nonaerospace firms is determined and activities such as spaceborne experiments, space commercialization, U.S. national security, and remote space operations are examined. 17. The International Space Station and the Space Debris Environment: 10 Years On Science.gov (United States) 2009-01-01 For just over a decade the International Space Station (ISS), the most heavily protected vehicle in Earth orbit, has weathered the space debris environment well. Numerous hypervelocity impact features on the surface of ISS caused by small orbital debris and meteoroids have been observed.
In addition to typical impacts seen on the large solar arrays, craters have been discovered on windows, hand rails, thermal blankets, radiators, and even a visiting logistics module. None of these impacts have resulted in any degradation of the operation or mission of the ISS. Validating the rate of small particle impacts on the ISS as predicted by space debris environment models is extremely complex. First, the ISS has been an evolving structure, from its original 20 metric tons to nearly 300 metric tons (excluding logistics vehicles) ten years later. Hence, the anticipated space debris impact rate has grown with the increasing size of ISS. Secondly, a comprehensive visual or photographic examination of the complete exterior of ISS has never been accomplished. In fact, most impact features have been discovered serendipitously. Further complications include the estimation of the size of an impacting particle without knowing its mass, velocity, and angle of impact and the effect of shadowing by some ISS components. Inadvertently and deliberately, the ISS has also been the source of space debris. The U.S. Space Surveillance Network officially cataloged 65 debris from ISS from November 1998 to November 2008: from lost cameras, sockets, and tool bags to intentionally discarded equipment and an old space suit. Fortunately, the majority of these objects fall back to Earth quickly with an average orbital lifetime of less than two months and a maximum orbital lifetime of a little more than 15 months. The cumulative total number of debris object-years is almost exactly 10, the equivalent of one piece of debris remaining in orbit for 10 years. An unknown number of debris too small to be 18. 
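The debris bookkeeping in the abstract above (65 cataloged objects accumulating roughly 10 object-years on orbit) can be cross-checked with simple arithmetic; both figures come directly from the abstract, and only the unit conversion is added:

```python
# Cross-check of the ISS debris statistics quoted in the abstract above.
n_objects = 65        # debris cataloged from ISS, Nov 1998 - Nov 2008
object_years = 10.0   # cumulative time on orbit across all objects

avg_lifetime_days = object_years / n_objects * 365.25
print(f"average orbital lifetime: {avg_lifetime_days:.0f} days")  # 56 days
```

An average of about 56 days per object agrees with the stated average orbital lifetime of "less than two months".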
Combining Regional Monitoring Stations with Space-based Data to Determine the MEO Satellite Orbit Directory of Open Access Journals (Sweden) WANG Le 2017-05-01 Full Text Available The ground monitoring stations of the BeiDou Navigation Satellite System (BDS) are regionally distributed and few in number, and more global ground stations cannot be built in the short term. The regional ground stations cannot observe the global Medium Earth Orbit (MEO) satellites continuously, which leads to low orbit precision in the BDS real-time broadcast ephemeris. Because real-time GNSS data from low Earth orbit (LEO) satellites can effectively make up for the limited spatial coverage of regional monitoring stations, a method is proposed in which the GNSS receivers of LEO satellites serve as highly dynamic space-based monitoring stations, and their data are combined with those of the ground monitoring stations to determine and predict the MEO satellite orbits. The numerical results show that, using the data of seven regional monitoring stations plus 1 to 3 LEO satellites, the precision of the MEO orbit determination can be increased by about 21%, 34%, and 55%, respectively. This demonstrates that regional ground stations combined with LEO satellite data can effectively improve the orbit precision of MEO satellites, and it is suggested that such combined data be used to improve the precision of the BDS broadcast ephemeris for MEO satellites. 19. International Space Station-Based Electromagnetic Launcher for Space Science Payloads Science.gov (United States) Jones, Ross M. 2013-01-01 A method was developed for lowering the cost of planetary exploration missions by using an electromagnetic propulsion/launcher, rather than a chemical-fueled rocket, for propulsion.
An electromagnetic launcher (EML) based at the International Space Station (ISS) would be used to launch small science payloads to the Moon and near Earth asteroids (NEAs) for the science and exploration missions. An ISS-based electromagnetic launcher could also inject science payloads into orbits around the Earth and perhaps to Mars. The EML would replace rocket technology for certain missions. The EML is a high-energy system that uses electricity rather than propellant to accelerate payloads to high velocities. The most common type of EML is the rail gun. Other types are possible, e.g., a coil gun, also known as a Gauss gun or mass driver. The EML could also "drop" science payloads into the Earth's upper 20. Flow Boiling and Condensation Experiment (FBCE) for the International Space Station Science.gov (United States) Mudawar, Issam; Hasan, Mohammad M.; Kharangate, Chirag; O'Neill, Lucas; Konishi, Chris; Nahra, Henry; Hall, Nancy; Balasubramaniam, R.; Mackey, Jeffrey 2015-01-01 The proposed research aims to develop an integrated two-phase flow boiling/condensation facility for the International Space Station (ISS) to serve as primary platform for obtaining two-phase flow and heat transfer data in microgravity. Science.gov (United States) 1985-01-01 Program plans are given for an integrating controller for space station autonomy as well as for controls and displays. The technical approach, facility requirements and candidate facilities, development schedules, and resource requirements estimates are given. 2. The First Five Years of the Alpha Magnetic Spectrometer on the International Space Station CERN Multimedia CERN. Geneva 2016-01-01 In the five years since its installation on the International Space Station, it has collected more than 90 billion cosmic rays. Some of the unexpected results and their possible interpretations will be presented. 3. 
Early Results from the Floating Potential Probe on the International Space Station Science.gov (United States) Morton, Thomas L.; Ferguson, Dale C. 2001-01-01 This viewgraph presentation provides information on the Floating Potential Probe (FPP) on the International Space Station (ISS). The FPP measures the body voltage (electric potential) of the ISS, and the measurements are then transmitted to Earth. 4. RFP for Smiles and Maxi projects to the International Space Station DEFF Research Database (Denmark) Denver, Troelz; Thuesen, Gøsta; Jørgensen, Finn E 1999-01-01 The document describes the functionality, the performance, and the requirements of the ASC Star Tracker, and it includes the RFP for the Smiles and Maxi projects on the International Space Station. 5. Statewide Scent Station Survey for South Carolina Furbearers Annual Report 1994 Data.gov (United States) US Fish and Wildlife Service, Department of the Interior — In 1984, a statewide scent station survey was initiated in SC to provide an index to the relative abundance of terrestrial furbearing animals. This report is from... 6. Information management system: A summary discussion. [for use in the space shuttle sortie, modular space station and TDR satellite Science.gov (United States) Sayers, R. S. 1972-01-01 An information management system is proposed for use in the space shuttle sortie, the modular space station, the tracking data relay satellite and associated ground support systems. Several different information management functions, including data acquisition, transfer, storage, processing, control and display are integrated in the system. 7.
Photovoltaic Engineering Testbed: A Facility for Space Calibration and Measurement of Solar Cells on the International Space Station Science.gov (United States) Landis, Geoffrey A.; Bailey, Sheila G.; Jenkins, Phillip; Sexton, J. Andrew; Scheiman, David; Christie, Robert; Charpie, James; Gerber, Scott S.; Johnson, D. Bruce 2001-01-01 The Photovoltaic Engineering Testbed ("PET") is a facility to be flown on the International Space Station to perform calibration, measurement, and qualification of solar cells in the space environment and then return the cells to Earth for laboratory use. PET will allow rapid turnaround testing of new photovoltaic technology under AM0 conditions. 8. STS-96 Onboard Photo: Departing From the International Space Station (ISS) Science.gov (United States) 1999-01-01 This STS-96 onboard photo of the International Space Station (ISS) was taken from Orbiter Discovery during a fly-around following separation of the two spacecraft. STS-96, the second Space Station assembly and resupply flight, launched on May 27, 1999 for an almost 10-day mission. The Shuttle's SPACEHAB double module carried internal and resupply cargo for station outfitting. Evident in the photo is the newly mounted Russian cargo crane, known as STRELA, which was carried aboard the shuttle in the Integrated Cargo Carrier (ICC). 9. Destination Station: Bringing The International Space Station to Communities Across the United States Science.gov (United States) Edgington, Susan 2014-01-01 Today, space is no longer just a field of advanced technological development and of scientific research of excellence, but has become an essential asset for everyday life. Space has spurred countless scientific and technological achievements which are commonly used in aeronautics, medicine, material science and production, in information and communications technology.
In parallel, more and more services are carried out through the use of space applications, ranging from detection of natural disasters and environmental monitoring to global navigation and telecommunication. Using space missions to build a better understanding of the universe fulfills our centuries-old curiosity and leads humanity into the future, opening up new frontiers of knowledge. The International Astronautical Congresses have always represented an arena in which issues have been discussed with friendship and among experts: scientists, technicians and managers from universities, agencies, research centres and industry. At the same time it introduces students and young professionals to the field. 10. Verification and Validation of the GNSS Stations at the Prototype Core Site for NASA's Next Generation Space Geodesy Network Science.gov (United States) Desai, S. D.; Gross, J.; Haines, B. J.; Stowers, D. A. 2013-12-01 Two operational GNSS stations, GODN and GODS, were established within 100 m of each other at the prototype core site of NASA's next generation Space Geodesy Network. The planned network will co-locate each of the four space geodetic techniques, GNSS, SLR, VLBI, and DORIS, with the goal of meeting modern requirements for the International Terrestrial Reference Frame. This prototype site is located at NASA's Geophysical and Astronomical Observatory at the Goddard Space Flight Center. The two GNSS stations at the prototype site have been producing tracking data from the GPS, GLONASS, and Galileo constellations since January 17, 2012. We present results from the verification and validation of these two stations, focusing in particular on GPS-based positioning of these two sites to monitor their relative baseline vector. We compare baseline recovery from independent precise point positioning of each station to a network-based approach. 
We also show the impact on the baseline as well as station repeatability from various improvements to our processing approach, namely the application of empirical antenna calibrations, elevation-dependent weighting, and site-specific troposphere modeling. Together, these approaches have resulted in a factor of two improvement in the precision of the baseline length. The standard deviation of the baseline vector, when using independent precise positioning of each station, is 0.5, 0.4, 1.6, and 0.4 mm in the east, north, up, and length components. The difference between the GPS-based baseline length and that from an independent local tie survey is < 1 mm. 11. Fluid management and its role in the future of Space Station Science.gov (United States) Salzman, J.; Vernon, R.; Hill, M.; Peterson, T. 1986-01-01 Technological challenges and suggested plans for meeting them pertaining to fluid management in the Space Station are discussed. A short overview is given of the major Space Station systems and operations which employ or rely on fluid management, followed by a description of the general system issues and challenges encountered in managing fluids in space. Examples of some current and near term activities directed toward providing the understanding and technologies necessary to overcome relevant problems are presented. Finally, suggested plans for similar but longer range research and development activities are offered. These plans emphasize the requirements and benefits of expanded in-space experiments, with the ultimate aim of using the Space Station as a facility for fluid management research and technology development efforts. 12. Experimentation Using the Mir Station as a Space Laboratory Science.gov (United States) 1998-01-01 Institute for Machine Building (TsNIIMASH), Korolev, Moscow Region, Russia; V. Teslenko and N. Shvets, Energia Space Corporation, Korolev, Moscow Region, Russia; J. A. Drakes, D. G.
Swann, and W. K. McGregor, Sverdrup Technology, Inc. ... and plume computations. Excitation of the plume gas molecular electronic states by solar radiation, geocorona Lyman-alpha, and electron impact 13. Space Station Freedom (SSF) Data Management System (DMS) performance model data base Science.gov (United States) Stovall, John R. 1993-01-01 14. Characterization of Bacilli Isolated from the Confined Environments of the Antarctic Concordia Station and the International Space Station Science.gov (United States) Timmery, Sophie; Hu, Xiaomin; Mahillon, Jacques 2011-05-01 Bacillus and related genera comprise opportunist and pathogen species that can threaten the health of a crew in confined stations required for long-term missions. In this study, 43 Bacilli from confined environments, that is, the Antarctic Concordia station and the International Space Station, were characterized in terms of virulence and plasmid exchange potentials. No specific virulence feature, such as the production of toxins or unusual antibiotic resistance, was detected. Most of the strains exhibited small or large plasmids, or both, some of which were related to the replicons of the Bacillus anthracis pXO1 and pXO2 virulence elements. One conjugative element, the capacity to mobilize and retromobilize small plasmids, was detected in a Bacillus cereus sensu lato isolate. Six out of 25 tested strains acquired foreign DNA by conjugation. Extremophilic bacteria were identified and exhibited the ability to grow at high pH and salt concentrations or at low temperatures. Finally, the clonal dispersion of an opportunist isolate was demonstrated in the Concordia station. Taken together, these results suggest that the virulence potential of the Bacillus isolates in confined environments tends to be low but genetic transfers could contribute to its capacity to spread. 15.
European utilisation plan for the International Space Station Science.gov (United States) Wilson, Andrew; Clancy, Paul 2003-02-01 This document was finalised only days before the Space Shuttle Columbia accident of 1 February 2003. It is a comprehensive overview of the projected utilisation by Europe of the ISS, covering the science planned, the facilities under development and planned, and a full database of all the selected proposals in life and physical sciences, space science and technology. It also covers utilisation planning in the commercialisation and education areas. The information given here is an accurate reflection of the European plan as it stood at the end of January 2003. Assuming a successful recovery of the Space Shuttle programme and the re-establishment of regular Shuttle flights to complete, maintain and utilise the ISS, along with continuing support from our Russian partner, the Executive expects this plan to be re-joined in due course, albeit with some time delays occasioned by the loss of Columbia. 16. The Interaction between SKYLON and the International Space Station Science.gov (United States) Hempsell, M. As part of the overall test flight programme of the SKYLON launch system it is planned to include 16 flights to the ISS in order to verify SKYLON's ability to interact with orbital facilities. These flights will test SKYLON equipped with two support systems, the SOFI (SKYLON Orbital Facility Interface), for unpressurised attachment, and the SPLM (SKYLON Passenger/ Logistics Module), for pressurised crew and logistics delivery. The issues involved with integrating the SKYLON test programme with the ISS are explored. Over the course of one year these flights could deliver almost 90 tonnes and 16 station crew but this is not without some problems. 
The number of flights and the quantity of logistics threaten to overwhelm the ISS; a new docking system would need to be mounted on the ISS; and the fact that they are test flights rather than operational flights may limit the support role they can undertake. 17. Underwater and Dive Station Work-Site Noise Surveys Science.gov (United States) 2008-03-14 ANU magnetic hydraulic drill press was used to extract coupons for this project. The drill press was powered by a hydraulic pressure unit (HPU) ... craft (Figure 6) as the diving platform with the HPU located aft approximately 15 feet from dive station on the starboard side of the boathouse. The ... minutes yielded a total in-water noise dose of 45.7%. In-air exposure to the HPU for the operator 1-2 feet from the unit was 100 dB(A) re 20 µPa 18. Survey of fluoride levels in vended water stations. Science.gov (United States) Jadav, Urvi G; Archarya, Bhavini S; Velasquez, Gisela M; Vance, Bradley J; Tate, Robert H; Quock, Ryan L 2014-01-01 This study sought to measure the fluoride concentration of water derived from vended water stations (VWS) and to identify its clinical implications, especially with regard to caries prevention and fluorosis. VWS and corresponding tap water samples were collected from 34 unique postal zip codes; samples were analyzed in duplicate for fluoride concentration. Average fluoride concentration in VWS water was significantly lower than that of tap water (P water ranged from drinking water may not be receiving optimal caries preventive benefits; thus dietary fluoride supplementation may be indicated. Conversely, to minimize the risk of fluorosis in infants consuming reconstituted infant formula, water from a VWS may be used. 19. How the Station will operate. [operation, management, and maintenance in space Science.gov (United States) Cox, John T. 1988-01-01 Aspects of the upcoming operational phase of the Space Station (SS) are examined.
What the crew members will do with their time in their specialized roles is addressed. SS maintenance and servicing and the interaction of the SS Control Center with Johnson Space Center is discussed. The planning of payload operations and strategic planning for the SS are examined. 20. GEROS-ISS: GNSS REflectometry, Radio Occultation and Scatterometry onboard the International Space Station DEFF Research Database (Denmark) Wickert, Jens; Cardellach, Estel; Bandeiras, Jorge 2016-01-01 GEROS-ISS stands for GNSS REflectometry, radio occultation, and scatterometry onboard the International Space Station (ISS). It is a scientific experiment, successfully proposed to the European Space Agency in 2011. The experiment as the name indicates will be conducted on the ISS. The main focus... 1. Forces during Tim Peake's Launch to the International Space Station Science.gov (United States) Mobbs, Robin 2016-01-01 Despite the advanced technology and engineering that has gone onto the International Space Station and other space programmes, the measurement of the force experienced in the spacecraft is tested using a method that is well over 350 years old. The time of oscillation of a simple pendulum, as often investigated in school physics, provides the basis… 2. Preliminary studies for the ORganics Exposure in Orbit (OREOcube) Experiment on the International Space Station NARCIS (Netherlands) Alonzo, Jason; Fresneau, A.; Elsaesser, A.; Chan, J.; Breitenbach, A.; Ehrenfreund, P.; Ricco, A.; Salama, F.; Mattioda, A.; Santos, O.; Cottin, H.; Dartois, E.; d'Hendecourt, L.; Demets, R.; Foing, B.; Martins, Z.; Sephton, M.; Spaans, M.; Quinn, R. Organic compounds that survive in uncommon space environments are an important astrobiology focus. The ORganics Exposure in Orbit (OREOcube) experiment will investigate, in real time, chemical changes in organic compounds exposed to low Earth orbit radiation conditions on an International Space Station 3.
ASIM - an Instrument Suite for the International Space Station DEFF Research Database (Denmark) Neubert, Torsten; Crosby, B.; Huang, T.-Y. 2009-01-01 ASIM (Atmosphere-Space Interactions Monitor) is an instrument suite for studies of severe thunderstorms and their effects on the atmosphere and ionosphere. The instruments are designed to observe transient luminous events (TLEs)—sprites, blue jets and elves—and terrestrial gamma-ray flashes (TGFs... 4. [Reply to “Space Station?” by L.H. Meredith] Space station?: Microgravity design is best first step Science.gov (United States) Baker, D. James Les Meredith's recent statement in Eos (September 29, p. 770) on objectives and uses of NASA's proposed space station argues that microgravity research, manufacturing, space physics, astrophysics, and Earth observations are not good justifications for the present zero-gravity design of the station. In my view, he is correct for the general issues of remote sensing, whether it is toward Earth or the planets and beyond. Such observations are indeed better carried out by automated platforms. But in the case of microgravity and space biomedicine, his arguments lose force. Science in the microgravity environment is a field of basic research ranging from materials science to studies of structure of large organic molecules. Today, and for the foreseeable future, the fundamental experiments to be done in the microgravity environment require close interaction with human experimenters, as in a ground-based laboratory. The space station with its microgravity environment and open access to astronauts, provides an environment consistent with the needs of that field of basic research. As we move to manufacturing, the needs for human interaction may become less, but that is probably decades away. 5.
Automating security monitoring and analysis for Space Station Freedom's electric power system Science.gov (United States) Dolce, James L.; Sobajic, Dejan J.; Pao, Yoh-Han 1990-01-01 Operating a large, space power system requires classifying the system's status and analyzing its security. Conventional algorithms are used by terrestrial electric utilities to provide such information to their dispatchers, but their application aboard Space Station Freedom will consume too much processing time. A new approach for monitoring and analysis using adaptive pattern techniques is presented. This approach yields an on-line security monitoring and analysis algorithm that is accurate and fast; and thus, it can free the Space Station Freedom's power control computers for other tasks. 6. Technology for Space Station Evolution. Volume 5: Structures and Materials/Thermal Control System Science.gov (United States) 1990-01-01 NASA's Office of Aeronautics and Space Technology (OAST) conducted a workshop on technology for space station evolution on 16-19 Jan. 1990. The purpose of this workshop was to collect and clarify Space Station Freedom technology requirements for evolution and to describe technologies that can potentially fill those requirements. These proceedings are organized into an Executive Summary and Overview and five volumes containing the Technology Discipline Presentations. Volume 5 consists of the technology discipline sections for Structures/Materials and the Thermal Control System. For each technology discipline, there is a level 3 subsystem description, along with papers. 7. Technology for Space Station Evolution. Volume 4: Power Systems/Propulsion/Robotics Science.gov (United States) 1990-01-01 NASA's Office of Aeronautics and Space Technology (OAST) conducted a workshop on technology for space station evolution on 16-19 Jan. 1990. 
The purpose of this workshop was to collect and clarify Space Station Freedom technology requirements for evolution and to describe technologies that can potentially fill those requirements. These proceedings are organized into an Executive Summary and Overview and five volumes containing the Technology Discipline Presentations. Volume 4 consists of the technology discipline sections for Power, Propulsion, and Robotics. For each technology discipline, there is a Level 3 subsystem description, along with the papers. 8. Technology for Space Station Evolution. Volume 3: EVA/Manned Systems/Fluid Management System Science.gov (United States) 1990-01-01 NASA's Office of Aeronautics and Space Technology (OAST) conducted a workshop on technology for space station evolution 16-19 Jan. 1990 in Dallas, Texas. The purpose of this workshop was to collect and clarify Space Station Freedom technology requirements for evolution and to describe technologies that can potentially fill those requirements. These proceedings are organized into an Executive Summary and Overview and five volumes containing the Technology Discipline Presentations. Volume 3 consists of the technology discipline sections for Extravehicular Activity/Manned Systems and the Fluid Management System. For each technology discipline, there is a Level 3 subsystem description, along with the papers. 9. An environment for the integration and test of the Space Station distributed avionics systems Science.gov (United States) Barry, Thomas; Scheffer, Terrance; Small, L. R. 1988-01-01 An approach to supplying an environment for the integration and test of the Space Station distributed avionics systems is described. Background is included on the development of this concept including the lessons learned from Space Shuttle experience. The environment's relationship to the process flow of the Space-Station verification, from systems development to on-orbit verification, is presented. 
The uses of the environment's hardware implementation, called Data Management System (DMS) kits, are covered. It is explained how these DMS kits provide a development version of the space-station operational environment and how this environment allows system developers to verify their systems' performance, fault detection, and recovery capability. Conclusions on how the use of the DMS kits, in support of this concept, will ensure adequate on-orbit test capability are included. 10. In the footsteps of Columbus European missions to the International Space Station CERN Document Server O'Sullivan, John 2016-01-01 The European Space Agency has a long history of cooperating with NASA in human spaceflight, having developed the Spacelab module for carrying in the payload bay of the Space Shuttle. This book tells of the development of ESA's Columbus microgravity science laboratory of the International Space Station and the European astronauts who work in it. From the beginning, ESA has been in close collaboration on the ISS, making a significant contribution to the station hardware. Special focus is given to Columbus and Cupola as well as station resupply using the ATV. Each mission is also examined individually, creating a comprehensive picture of ESA's crucial involvement over the years. Extensive use of color photographs from NASA and ESA to depict the experiments carried out, the phases of the ISS construction, and the personal stories of the astronauts in space highlights the crucial European work on human spaceflight. 11. The Electric Power System of the International Space Station: A Platform for Power Technology Development Science.gov (United States) Gietl, Eric B.; Gholdston, Edward W.; Manners, Bruce A.; Delventhal, Rex A. 2000-01-01 The electrical power system developed for the International Space Station represents the largest space-based power system ever designed and, consequently, has driven some key technology aspects and operational challenges.
The full U.S.-built system consists of a 160-Volt dc primary network and a more tightly regulated 120-Volt dc secondary network. Additionally, the U.S. system interfaces with the 28-Volt system in the Russian segment. The international nature of the Station has resulted in modular converters, switchgear, outlet panels, and other components being built by different countries, with the associated interface challenges. This paper provides details of the architecture and unique hardware developed for the Space Station, and examines the opportunities it provides for further long-term space power technology development, such as concentrating solar arrays and flywheel energy storage systems.

12. Robust H infinity control design for the space station with structured parameter uncertainty
Science.gov (United States)
Byun, Kuk-Whan; Wie, Bong; Geller, David; Sunkel, John
1992-01-01

A robust H-infinity control design methodology and its application to a Space Station attitude and momentum control problem are presented. This new approach incorporates nonlinear multi-parameter variations in the state-space formulation of H-infinity control theory. An application of this robust H-infinity control synthesis technique to the Space Station control problem yields a remarkable result in stability robustness with respect to a moments-of-inertia variation of about 73% in one of the structured uncertainty directions. The performance and stability of this new robust H-infinity controller for the Space Station are compared to those of other controllers designed using a standard linear-quadratic-regulator synthesis technique.

13.
Microbe space exposure experiment at International Space Station (ISS) proposed in "Tanpopo" mission
Science.gov (United States)
Yokobori, Shin-Ichi; Yang, Yinjie; Sugino, Tomohiro; Kawaguchi, Yuko; Yoshida, Satoshi; Hashimoto, Hirofumi; Narumi, Issay; Kobayashi, Kensei; Yamagishi, Akihiko

Microbes have been collected from high altitude using balloons, aircraft, and meteorological rockets since 1936. Spore-forming fungi, Bacilli, and Micrococci (probably Deinococci) have been isolated in these experiments. These spores and Deinococci are known for their extremely high resistance to UV, gamma rays, and other radiation. We have also collected microorganisms at high altitude using aircraft and balloons. We collected two novel species of the genus Deinococcus, one from the top of the troposphere (D. aerius) and the other from the bottom of the stratosphere (D. aetherius). These two species showed resistance to UV and radiation such as gamma rays comparable with that of D. radiodurans R1. If microbes were found to be present even at the higher altitude of low Earth orbit (400 km), that fact would support the possibility of interplanetary migration of terrestrial life. Indeed, the panspermia hypothesis was proposed to explain how organisms on the Earth originated at such an early stage of Earth's history. Recent findings from Martian meteorites have suggested the possible existence of extraterrestrial life, and interplanetary migration of life as well. We proposed the "Tanpopo" mission to examine the possible interplanetary migration of microbes and organic compounds on the Japan Experimental Module (JEM) of the International Space Station (ISS). Two of the six subthemes in Tanpopo concern the possible interplanetary migration of microbes: a capture experiment of microbes at the ISS orbit and a space exposure experiment of microbes. In this paper, we focus on the space exposure experiment of microbes.
In our proposal, microbes will be exposed to the space environment with and without model clay materials that might protect microbes from vacuum UV and cosmic rays. Spores of Bacillus sp. and vegetative cells of D. radiodurans and our novel deinococcal species isolated from high altitude are candidates for the exposure experiment. In preliminary experiments, clay materials tend to increase

14. Microgravity Science Glovebox (MSG), Space Science's Past, Present and Future Aboard the International Space Station (ISS)
Science.gov (United States)
Spivey, Reggie; Spearing, Scott; Jordan, Lee
2012-01-01

The Microgravity Science Glovebox (MSG) is a double rack facility aboard the International Space Station (ISS), which accommodates science and technology investigations in a "workbench"-type environment. The MSG has been operating on the ISS since July 2002 and is currently located in the US Laboratory Module. In fact, the MSG has been used for over 10,000 hours of scientific payload operations and plans to continue for the life of the ISS. The facility has an enclosed working volume that is held at a negative pressure with respect to the crew living area. This allows the facility to provide two levels of containment for small parts, particulates, fluids, and gases. This containment approach protects the crew from possible hazardous operations that take place inside the MSG work volume and allows researchers a controlled, pristine environment for their needs. Research investigations operating inside the MSG are provided a large 255-liter enclosed work space; 1000 watts of dc power via a versatile supply interface (120, 28, +12, and 5 Vdc); 1000 watts of cooling capability; video and data recording with real-time downlink; ground commanding capabilities; access to the ISS Vacuum Exhaust and Vacuum Resource Systems; and a gaseous nitrogen supply. These capabilities make the MSG one of the most utilized facilities on the ISS.
MSG investigations have involved research in cryogenic fluid management, fluid physics, spacecraft fire safety, materials science, combustion, and plant growth technologies. Modifications to the MSG facility are currently under way to expand its capabilities and provide for investigations involving Life Science and Biological research. In addition, the MSG video system is being replaced with a state-of-the-art digital video system with high-definition/high-speed capabilities and near-real-time downlink capabilities. This paper will provide an overview of the MSG facility, a synopsis of the research that has already been accomplished in the MSG, and an

15. A radiological assessment of space nuclear power operations near Space Station Freedom
Science.gov (United States)
Stevenson, Steve
1990-01-01

In order to accomplish NASA's more ambitious exploration goals, nuclear reactors may be used in the vicinity of Space Station Freedom (SSF), either as power sources for coorbiting platforms or as part of the propulsion system for departing and returning personnel or cargo vehicles. This study identifies ranges of operational parameters, such as parking distances and reactor cooldown times, which would reasonably guarantee that doses to the SSF crew from all radiation sources would be below guidelines recently recommended by the National Council on Radiation Protection and Measurements. The specific scenarios considered include: (1) the launch and return of a nuclear electric propulsion vehicle, (2) the launch and return of a nuclear thermal rocket vehicle, (3) the operation of an SP-100 class reactor on a coorbiting platform, (4) the activation of materials near operating reactors, (5) the storage and handling of radioisotope thermal generator units, and (6) the storage and handling of fresh and previously operated reactors.
Portable reactor shield concepts were examined for relaxing the operational constraints imposed by reactors that are unshielded for human-proximity operations; such shields might also be used to provide additional SSF crew protection from natural background radiation.

16. Solar panels for the International Space Station are uncrated and moved in the SSPF
Science.gov (United States)
1998-01-01

In the Space Station Processing Facility, the overhead crane slowly moves solar panels intended for the International Space Station (ISS). The panels are the first set of U.S.-provided solar arrays and batteries for the ISS, scheduled to be part of mission STS-97 in December 1999. The mission, fifth in the U.S. flights for construction of the ISS, will build and enhance the capabilities of the Space Station. It will deliver the solar panels as well as radiators to provide cooling. The Shuttle will spend 5 days docked to the station, which at that time will be staffed by the first station crew. Two space walks will be conducted to complete assembly operations while the arrays are attached and unfurled. A communications system for voice and telemetry also will be installed. At the left of the crane and panels is the Multipurpose Logistics Module (MPLM) Leonardo. A reusable logistics carrier, the MPLM is scheduled to be launched on Space Shuttle Mission STS-100, targeted for April 2000.

17. Does the Underground Sidewall Station Survey Method Meet MHSA ...
African Journals Online (AJOL)
Grobler, Hendrik

"... or access development, mine boundaries ..." (Mine Health and Safety Act No 29 of 1996; Government Gazette, 27 May 2011). As is the case on most South African mines, the mining property is adjacent to other mining properties. In such a case where the survey network in a ... (Table columns: Point, Date stamp, Y Co-ord, X Co-ord, Elevation.)

18.
Applicability of NASA Polar Technologies to British Antarctic Survey Halley VI Research Station
Science.gov (United States)
Flynn, Michael
2005-01-01

From 1993 through 1997, NASA and the National Science Foundation (NSF) developed a variety of environmental infrastructure technologies for use at the Amundsen-Scott South Pole Station. The objectives of this program were to reduce the cost of operating the South Pole Station, reduce the environmental impact of the Station, and increase the quality of life for Station inhabitants. The result of this program was the development of a set of sustainability technologies designed specifically for polar applications. In the intervening eight years, many of the technologies developed through this program have been commercialized and tested in extreme environments and are now available for use throughout Antarctica and the circumpolar north. The objective of this document is to provide information covering technologies that might also be applicable to the British Antarctic Survey's (BAS) proposed new Halley VI Research Station. All technologies described are commercially available.

19. Application of Different Statistical Techniques in Integrated Logistics Support of the International Space Station Alpha
Science.gov (United States)
Sepehry-Fard, F.; Coulthard, Maurice H.
1995-01-01

The process used to predict the values of maintenance time-dependent variable parameters, such as mean time between failures (MTBF), must be one that will not in turn introduce uncontrolled deviation into the results of the ILS analysis, such as life cycle cost spares calculations. A minor deviation in the values of maintenance time-dependent variable parameters such as MTBF over time will have a significant impact on logistics resource demands, International Space Station availability, and maintenance support costs.
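As a hedged illustration of the sensitivity claimed in entry 19 (this sketch is not taken from the paper): a common spares-sizing model treats demand over an operating horizon T as Poisson with mean T/MTBF, stocking the smallest count that gives at least 95% probability of no shortage. The function name, horizon, and confidence level below are illustrative assumptions; the sketch only shows how a modest MTBF error shifts the required stock level.

```python
# Hypothetical spares-sizing sketch (illustrative only, not the paper's method):
# demand over `op_hours` is modeled as Poisson with mean op_hours/mtbf, and the
# stock level is the smallest s with P(demand <= s) >= confidence.
from math import exp

def spares_needed(op_hours, mtbf, confidence=0.95):
    """Smallest s such that the Poisson(op_hours/mtbf) CDF at s meets `confidence`."""
    lam = op_hours / mtbf
    p = exp(-lam)          # P(X = 0)
    cdf, s = p, 0
    while cdf < confidence:
        s += 1
        p *= lam / s       # recurrence: P(X = s) = P(X = s-1) * lam / s
        cdf += p
    return s

# Ten years of continuous operation for one unit (~87,600 hours):
# a 10% error in the MTBF estimate shifts the required spares count.
for mtbf in (10_000.0, 9_000.0):
    print(mtbf, spares_needed(87_600.0, mtbf))
```

Because the required stock is a high quantile of the demand distribution, even a small downward shift in MTBF raises both the mean demand and the safety margin, which is the kind of compounding effect the abstract warns about.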
It is the objective of this report to identify the magnitude of the expected enhancement in the accuracy of the results for the International Space Station reliability and maintainability data packages by providing examples. These examples partially portray the necessary information by evaluating the impact of the said enhancements on the life cycle cost and the availability of the International Space Station.

20. Space Station Mission Planning System (MPS) development study. Volume 1: Executive summary
Science.gov (United States)
Klus, W. J.
1987-01-01

The basic objective of the Space Station (SS) Mission Planning System (MPS) Development Study was to define a baseline Space Station mission plan and the associated hardware and software requirements for the system. A detailed definition of the Spacelab (SL) payload mission planning process and SL Mission Integration Planning System (MIPS) software was derived. A baseline concept was developed for performing SS manned base payload mission planning, consistent with current Space Station design/operations concepts and philosophies. The SS MPS software requirements were defined. Requirements for new software include candidate programs for the application of artificial intelligence techniques to capture and make more effective use of mission planning expertise. An SS MPS Software Development Plan was developed which phases efforts for developing the software to implement the SS mission planning concept.

1. The particulate environment surrounding the space station: Estimates from the PACS data
Science.gov (United States)
Green, Byron David
1988-01-01

The objectives of the Particle Analysis Cameras for Shuttle (PACS) experiment (flown on STS-61C) are described, and the experiment results are discussed in reference to the expected Space Station environment. Estimates of the sources of particulates surrounding the Space Station were made based on the existing orbital observations data base.
Particulates surrounding the shuttle are mostly event-related or arise from the residual release of mass (dust) brought to orbit from the ground. The particulates surrounding the Space Station are likely to arise from additional sources such as operations, docking, erosion, and abrasion. Thus, scaling of the existing data base to long-duration missions in low-Earth orbit requires analysis, modeling, and simulation testing.

2. Maturity of the Bosch CO2 reduction technology for Space Station application
Science.gov (United States)
Wagner, Robert C.; Carrasquillo, Robyn; Edwards, James; Holmes, Roy
1988-01-01

The Bosch process, which catalytically reduces CO2 with H2 to solid carbon and water, is a promising technique for the reduction of the CO2 removed from the Space Station atmosphere and the subsequent water formation for O2 recovery. A Bosch engineering subsystem prototype CO2 reduction unit was developed to demonstrate the feasibility of the Bosch process as a viable technology for Space Station application. A man-rated prototype unit is then described as part of the ECLSS Technology Demonstrator Program. The goal was to develop a Bosch subsystem that not only meets the performance requirements of two 60 person-day carbon cartridge capacities, but also satisfies inherent man-rated requirements such as offgassing characteristics, fail-safe operation, and ease of maintainability. It is concluded that the technology is at a state of maturity directly applicable to flight status for the NASA Space Station program.

3. Advancements in water vapor electrolysis technology
[for Space Station ECLSS]
Science.gov (United States)
Chullen, Cinda; Heppner, Dennis B.; Sudar, Martin
1988-01-01

The paper describes a technology development program whose goal is to develop water vapor electrolysis (WVE) hardware that can be used selectively as a localized topping capability in areas of high metabolic activity, without oversizing the central air revitalization system, on long-duration manned space missions. The WVE will be used primarily to generate O2 for the crew cabin, but also to provide partial humidity control by removing water vapor from the cabin atmosphere. The electrochemically based WVE interfaces with cabin air that is controlled in the following ranges: dry bulb temperature of 292 to 300 K; dew point temperature of 278 to 289 K; relative humidity of 25 to 75 percent; and pressure of 101 ± 1.4 kPa. Design requirements, construction details, and results for both single-cell and multicell module testing are presented, and the preliminary sizing of a multiperson subsystem is discussed.

4. Advances in Rodent Research Missions on the International Space Station
Science.gov (United States)
Choi, S. Y.; Ronca, A.; Leveson-Gower, D.; Gong, C.; Stube, K.; Pletcher, D.; Wigley, C.; Beegle, J.; Globus, R. K.
2016-01-01

A research platform for rodent experiments on the ISS is a valuable tool for advancing biomedical research in space. Capabilities offered by the Rodent Research project developed at NASA Ames Research Center can support experiments of much longer duration on the ISS than previous experiments performed on the Space Shuttle. NASA's Rodent Research (RR)-1 mission was completed successfully and achieved a number of objectives, including validation of flight hardware, on-orbit operations, and science capabilities, as well as support of a CASIS-sponsored experiment (Novartis) on muscle atrophy. Twenty C57BL/6J adult female mice were launched on the SpaceX (SpX)-4 Dragon vehicle and thrived for up to 37 days in microgravity.
Daily health checks of the mice were performed during the mission via downlinked video; all flight animals were healthy, displayed normal behavior, and showed higher levels of physical activity compared to ground controls. Behavioral analysis demonstrated that Flight and Ground Control mice exhibited the same range of behaviors, including eating, drinking, exploratory behavior, self- and allo-grooming, and social interactions indicative of healthy animals. The animals were euthanized on orbit, and select tissues were collected from some of the mice on orbit to assess the long-term sample storage capabilities of the ISS. In general, the data obtained from the flight mice were comparable to those from the three groups of control mice (baseline, vivarium, and ground controls, the last of which were housed in flight hardware), showing that the ISS has adequate capability to support long-duration rodent experiments. The team recovered 35 tissues from 40 RR-1 frozen carcasses, yielding 3300 aliquots of tissues to distribute to the scientific community in the U.S., including NASA's GeneLab project and scientists via Space Biology's Biospecimen Sharing Program and the Ames Life Science Data Archive. Tissues also were distributed to Russian research colleagues at the Institute for

5. The OVRO-LWA: An Extrasolar Space Weather Monitoring Station
Science.gov (United States)
Hallinan, Gregg; Anderson, Marin M.
2017-05-01

The OVRO-LWA is a new array at Caltech's Owens Valley Radio Observatory which images the entire viewable sky at 24-82 MHz (2400 channels) every 10 seconds. It currently consists of 288 dual-polarization antennas spanning 1.6 km, but will eventually expand to 352 antennas spanning 2.5 km, giving a resolution of 5 arcminutes in each all-sky image.
One of the primary science goals of the OVRO-LWA is the continuous monitoring of thousands of stellar systems to search for extrasolar space weather, i.e., the highly variable and circularly polarized radio emission produced during stellar coronal mass ejections and planetary auroral events. Our final design sensitivity will allow all-sky Stokes I and V images with rms noise of 100 mJy in 10 seconds, all-sky Stokes V images with rms noise of 5 mJy every day, and Stokes V images with 500 microJy noise after a 1000-hour integration. Simultaneous monitoring of a large fraction of our targets is provided by the Evryscope, a new optical telescope that can image 8,600 square degrees simultaneously, producing two-minute-cadence, multi-year light curves for every star brighter than Sloan g = 16.5, thereby investigating the presence of flare emission associated with CMEs detected by the OVRO-LWA. I will introduce the OVRO-LWA and describe the enormous technical and data challenges in delivering continuous all-sky imaging at low frequencies in pursuit of extrasolar space weather.

6.
Science.gov (United States)
Uhran, Mark L.; Timm, Marc G.
1993-01-01

7. Benchmarks of programming languages for special purposes in the space station
Science.gov (United States)
Knoebel, Arthur
1986-01-01

Although Ada is likely to be chosen as the principal programming language for the Space Station, certain needs, such as expert systems and robotics, may be better developed in special languages. The languages LISP and Prolog are studied and some benchmarks derived. The mathematical foundations for these languages are reviewed. Likely areas of the space station are sought out where automation and robotics might be applicable. Benchmarks are designed which are functional, mathematical, relational, and expert in nature. The coding will depend on the particular versions of the languages which become available for testing.

8.
Space Station Freedom - Optimized to support microgravity research and earth observations
Science.gov (United States)
Bilardo, Vincent J., Jr.; Herman, Daniel J.
1990-01-01

The Space Station Freedom Program is reviewed, with particular attention given to the Space Station configuration, program element descriptions, and utilization accommodation. Since plans call for the assembly of the initial SSF configuration over a 3-year time span, it is NASA's intention to perform useful research on it during the assembly process. The research will include microgravity experiments and observational sciences. The specific attributes supporting these attempts are described, such as maintenance of a very low microgravity level and continuous orientation of the vehicle to maintain a stable, accurate local-vertical/local-horizontal attitude.

9. Psychiatric components of a Health Maintenance Facility (HMF) on Space Station
Science.gov (United States)
Santy, Patricia A.
1987-01-01

The operational psychiatric requirements for a comprehensive Health Maintenance Facility (HMF) on a permanently manned Space Station are examined. Consideration is given to the psychological health maintenance program designed for the diagnosis of mental distress in astronauts during flight and for the prevention of mental breakdown. The types of mental disorders that can possibly affect astronauts in flight are discussed, including various organic, psychotic, and affective mental disorders, as well as anxiety, adjustment, and somatoform/dissociative disorders. Special attention is given to therapeutic considerations for psychiatric operations on the Space Station, such as restraints, psychopharmacology, psychotherapy, and psychosocial support.

10. STS-100 Onboard Photograph - International Space Station Remote Manipulator System
Science.gov (United States)
2001-01-01

This is a Space Shuttle STS-100 mission onboard photograph.
Astronaut Scott Parazynski totes a Direct Current Switching Unit while anchored on the end of the Canadian-built Remote Manipulator System (RMS) robotic arm. The RMS is in the process of moving Parazynski to the exterior of the Destiny laboratory (right foreground), where he will secure the spare unit, a critical part of the station's electrical system, to the stowage platform in case future crews need it. Also in the photograph are the Italian-built Raffaello Multipurpose Logistics Module (center) and the new Canadarm2 (lower right), or Space Station Remote Manipulator System.

11. The SHARE flight experiment - An advanced heat pipe radiator for Space Station
Science.gov (United States)
Alario, J. P.; Otterstedt, P. J.
1986-01-01

This paper reports on the design and thermal vacuum certification testing of the Space Station Heat Pipe Advanced Radiator Element (SHARE) Shuttle flight experiment, with primary emphasis on the heat pipe radiator system. The main objective of the SHARE experiment is to demonstrate suitable 0-g heat transfer performance of a 50-ft-long, high-capacity monogroove heat pipe radiator element being developed for possible Space Station application. All of the flight certification tests were achieved, including a maximum heat rejection of 2 kW, thawing of a frozen heat pipe, and uninterrupted operation under cycling environmental and evaporator heat loads.

12. Importance of biological systems in industrial waste treatment: potential application to the space station
Science.gov (United States)
Revis, Nathaniel; Holdsworth, George
1990-01-01

In addition to having applications for waste management issues on planet Earth, microbial systems have application in reducing waste volumes aboard spacecraft. A candidate for such an application is the space station. Many of the planned experiments generate aqueous waste.
To recycle air and water, the contaminants from previous experiments must be removed before the air and water can be used for other experiments. This can be achieved using microorganisms in a bioreactor. Potential bioreactors (for inorganics, organics, and etchants) are discussed. Current technologies that may be applied to waste treatment are described. Examples are given of how biological systems may be used in treating waste on the space station.

13. Life Sciences Research Facility automation requirements and concepts for the Space Station
Science.gov (United States)
Rasmussen, Daryl N.
1986-01-01

An evaluation is made of the methods and preliminary results of a study on prospects for the automation of the NASA Space Station's Life Sciences Research Facility. In order to remain within current Space Station resource allocations, approximately 85 percent of planned life science experiment tasks must be automated; these tasks encompass specimen care and feeding, cage and instrument cleaning, data acquisition and control, sample analysis, waste management, instrument calibration, materials inventory and management, and janitorial work. Task automation will free crews for specimen manipulation, tissue sampling, data interpretation and communication with ground controllers, and experiment management.

14. Monitoring of International Space Station Telemetry Using Shewhart Control Charts
Science.gov (United States)
Fitch, Jeffery T.; Simon, Alan L.; Gouveia, John A.; Hillin, Andrew M.; Hernandez, Steve A.
2012-01-01

Shewhart control charts have been established as an expedient method for analyzing dynamic, trending data in order to identify anomalous subsystem performance as soon as such performance would exceed a statistically established baseline.
Additionally, this leading-indicator tool integrates a selection methodology that reduces false positive indications, optimizes true leading-indicator events, minimizes computer processor unit duty cycles, and addresses human factors concerns (i.e., the potential for flight-controller data overload). This innovation leverages statistical process control and provides a relatively simple way to allow flight controllers to focus their attention on subtle system changes that could lead to dramatic off-nominal system performance. Finally, this capability improves response time to potential hardware damage and/or crew injury, thereby improving space flight safety. Shewhart control charts require normalized data; however, the telemetry from the ISS Early External Thermal Control System (EETCS) was not normally distributed. A method for normalizing the data was implemented, as was a means of selecting data windows, the number of standard deviations (Sigma Level), the number of consecutive points out of limits (Sequence), and direction (increasing or decreasing trend data). By varying these options and treating them like dial settings, the numbers of nuisance alerts and leading indicators were optimized. The goal was to capture all leading indicators while minimizing the number of nuisances. Lean Six Sigma (L6S) design-of-experiment methodologies were employed. To optimize the results, the Perl programming language was used to automate the processing of the massive amounts of telemetry data, the control chart plots, and the data analysis.

15. Long-Term International Space Station (ISS) Risk Reduction Activities
Science.gov (United States)
Fodroci, M. P.; Gafka, G. K.; Lutomski, M. G.; Maher, J. S.
2012-01-01

As the assembly of the ISS nears completion, it is worthwhile to step back and review some of the actions pursued by the Program in recent years to reduce risk and enhance the safety and health of ISS crewmembers, visitors, and space flight participants.
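The "dial settings" named in the Shewhart entry above (data window, Sigma Level, Sequence, direction) can be sketched as a minimal screening routine. This is a hypothetical illustration, assuming a fixed baseline window and an increasing-trend rule; the parameter names mirror the abstract's terminology, and the actual ISS Perl tooling is not public.

```python
# Hypothetical sketch of the Shewhart-style screening described above:
# establish control limits from an in-control baseline window, then flag an
# alert when `sequence` consecutive points fall more than `sigma_level`
# standard deviations above the mean (the "increasing" direction).
from statistics import mean, stdev

def shewhart_alerts(samples, window=20, sigma_level=3.0, sequence=3):
    """Return indices where `sequence` consecutive points exceed the
    upper control limit derived from the first `window` samples."""
    baseline = samples[:window]
    mu, sd = mean(baseline), stdev(baseline)
    upper_limit = mu + sigma_level * sd
    alerts, run = [], 0
    for i in range(window, len(samples)):
        if samples[i] > upper_limit:
            run += 1
            if run >= sequence:          # sustained excursion, not a blip
                alerts.append(i)
        else:
            run = 0                      # any in-limit point resets the run
    return alerts

# A steady signal followed by a sustained upward shift:
telemetry = [10.0, 10.2, 9.9, 10.1, 10.0] * 5 + [13.0] * 5
print(shewhart_alerts(telemetry))  # → [27, 28, 29]
```

Raising `sequence` suppresses single-point nuisance alerts at the cost of a later first indication, which is exactly the nuisance-versus-leading-indicator trade the abstract describes tuning.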
While the initial ISS requirements and design were intended to provide the best practicable levels of safety, it is always possible to further reduce risk, given the determination, commitment, and resources to do so. The following is a summary of some of the steps taken by the ISS Program Manager, by our International Partners, by hardware and software designers, by operational specialists, and by safety personnel to continuously enhance the safety of the ISS and to reduce risk to all crewmembers. While years of work went into the development of ISS requirements, there are many things associated with risk reduction in a Program like the ISS that can only be learned through actual operational experience. These risk reduction activities can be divided into roughly three categories:

(1) areas that were initially noncompliant and have subsequently been brought into compliance or near compliance (e.g., Micrometeoroid and Orbital Debris [MMOD] protection, acoustics);

(2) areas where initial design requirements were eventually considered inadequate and were subsequently augmented (e.g., Toxicity Hazard Level-4 [THL] materials, emergency procedures, emergency equipment, control of drag-throughs);

(3) areas where risks were initially underestimated and have subsequently been addressed through additional mitigation (e.g., Extravehicular Activity [EVA] sharp edges, plasma shock hazards).

Due to the hard work and cooperation of many parties working together across the span of more than a decade, the ISS is now a safer and healthier environment for our crew, in many cases exceeding the risk reduction targets inherent in the intent of the original design. It will provide a safe and stable platform for utilization and discovery for years

16.
Amateur Radio On The International Space Station (ARISS) - The First Educational Outreach Program On ISS
Science.gov (United States)
Conley, Carolynn Lee; Bauer, Frank H.; Brown, Deborah A.; White, Rosalie
2002-01-01

Amateur Radio on the International Space Station (ARISS) represents the first educational outreach program flying on the International Space Station (ISS). The astronauts and cosmonauts will work hard on the International Space Station, but they plan to take some time off for educational activities with schools. The National Aeronautics and Space Administration's (NASA's) Education Division is a major supporter and sponsor of this student outreach activity on the ISS. This meets NASA's educational mission objective: to inspire the next generation of explorers ... as only NASA can. The amateur radio community is helping to enrich the experience of those visiting and living on the station, as well as that of students on Earth. Through ARISS-sponsored hardware and activities, students on Earth get a first-hand feel for what it is like to live and work in space. This paper will discuss the educational outreach accomplishments of ARISS, the school contact process, the ARISS international cooperation and volunteers, and ISS ham radio plans for the future.

17. Research on determination of the scale of parking space on High Speed Rail Station, using East Ji’nan Station as an example
Science.gov (United States)
Lv, Jie; Guo, Jianmin; Zhang, Yibin
2017-08-01

With the rapid growth of the High-Speed Railway network in China, more and more stations have been designed. Based on planning practice, this article analyzes the factors influencing the proper supply of parking space at High-Speed Railway stations using the analogy and parking-turnover methods, taking East Ji’nan High-Speed Railway Station as an example to give recommended values.

18.
Life Science on the International Space Station Using the Next Generation of Cargo Vehicles
Science.gov (United States)
Robinson, J. A.; Phillion, J. P.; Hart, A. T.; Comella, J.; Edeen, M.; Ruttley, T. M.
2011-01-01

With the retirement of the Space Shuttle and the transition of the International Space Station (ISS) from assembly to full laboratory capabilities, the opportunity to perform life science research in space has increased dramatically, while the operational considerations associated with transportation of the experiments have changed dramatically. US researchers have allocations on the European Automated Transfer Vehicle (ATV) and the Japanese H-II Transfer Vehicle (HTV). In addition, the International Space Station (ISS) Cargo Resupply Services (CRS) contract will provide consumables and payloads to and from the ISS via the unmanned SpaceX (which offers launch and return capabilities) and Orbital (which offers only launch capabilities) resupply vehicles. Early requirements drove the capabilities of the vehicle providers; however, many other engineering considerations affect the actual design and operations plans. To better enable the use of the International Space Station as a National Laboratory, ground and on-orbit facility development can augment the vehicle capabilities to better support needs for cell biology, animal research, and conditioned sample return. NASA life scientists with experience launching research on the Space Shuttle can find the trades between the capabilities of the many different vehicles confusing. In this presentation we will summarize vehicle and associated ground processing capabilities, as well as key concepts of operations for the different types of life sciences research being launched in the cargo vehicles. We will provide the latest status of vehicle capabilities and of the support hardware and facilities development being made to enable the broadest implementation of life sciences research on the ISS.

19.
Large Deployable Reflector (LDR) system concept and technology definition study. Analysis of space station requirements for LDR Science.gov (United States) Agnew, Donald L.; Vinkey, Victor F.; Runge, Fritz C. 1989-01-01 A study was conducted to determine how the Large Deployable Reflector (LDR) might benefit from the use of the space station for assembly, checkout, deployment, servicing, refurbishment, and technology development. Requirements that must be met by the space station to supply benefits for a selected scenario are summarized. Quantitative and qualitative data are supplied. Space station requirements for LDR which may be utilized by other missions are identified. A technology development mission for LDR is outlined and requirements summarized. A preliminary experiment plan is included. Space Station Data Base SAA 0020 and TDM 2411 are updated. 20. Microwave energy transmission test toward the SPS using the space station Energy Technology Data Exchange (ETDEWEB) Kaya, N.; Matsumoto, H.; Miyatake, S.; Kimura, I.; Nagatomo, M. 1986-12-01 An outline of a project METT (Microwave Energy Transmission Test) using the Space Station is described. The objectives of the METT are to develop and test the technology of microwave energy transmission for the future Solar Power Satellite (SPS), and to estimate the environmental effects of the high power microwaves on the ionosphere and the atmosphere. Energy generated with solar cells is transmitted from a transmitting antenna on the bus platform near the Space Station to a rectenna on the sub-satellite or the ground station in order to test the total efficiency and the functions of the developed system of the energy transmission. 
Plasma similar to that in the D and E layers in the ionosphere is produced in a large balloon opened on the sub-satellite in order to investigate possible interactions between the SPS microwave and the ionospheric plasma and to determine the maximum power density of the microwave beam which passes through the ionosphere. 1. Microwave energy transmission test toward the SPS using the Space Station Energy Technology Data Exchange (ETDEWEB) Kaya, N.; Matsumoto, H.; Miyatake, S.; Kimura, I.; Nagatomo, M. 1985-01-01 An outline of a project METT (Microwave Energy Transmission Test) using the Space Station is described. The objectives of the METT are to develop and test the technology of microwave energy transmission for the future Solar Power Satellite (SPS), and to estimate the environmental effects of the high power microwaves on the ionosphere and the atmosphere. Energy generated with solar cells is transmitted from a transmitting antenna on the bus platform near the Space Station to a rectenna on the sub-satellite or the ground station in order to test the total efficiency and the functions of the developed system of the energy transmission. Plasma similar to that in the D and E layers in the ionosphere is produced in a large balloon opened on the sub-satellite in order to investigate possible interactions between the SPS microwave and the ionospheric plasma and to determine the maximum power density of the microwave beam which passes through the ionosphere. 9 references. 2. Conceptual Inquiry of the Space Shuttle and International Space Station GNC Flight Controllers Science.gov (United States) Kranzusch, Kara 2007-01-01 The concept of Mission Control was envisioned by Christopher Columbus Kraft in the 1960s. 
Instructed to figure out how to operate human space flight safely, Kraft envisioned a room of sub-system experts troubleshooting problems and supporting nominal flight activities under the guidance of one Flight Director who is responsible for the success of the mission. To facilitate clear communication, MCC communicates with the crew through a Capsule Communicator (CAPCOM), who is an astronaut. Gemini 4 was the first mission to be supported by such an MCC and successfully completed the first American EVA. The MCC seen on television is called the Flight Control Room (FCR, pronounced ficker), otherwise known as the front room. While this room is the most visible aspect, it is a very small component of the entire control center. The Shuttle FCR is known as the White FCR (WFCR) and Station's as FCR-1. (FCR-1 was actually the first FCR built at JSC, used through the Gemini, Apollo, and Shuttle programs until the WFCR was completed in 1992. Afterwards FCR-1 was refurbished, first for the Life Sciences Center and then for the ISS in 2006.) Along with supporting the Flight Director, each FCR operator typically also supervises two or three support personnel in a back room called the Multi-Purpose Support Room (MPSR, pronounced mipser). MPSR operators are more deeply focused on their specific subsystems and are responsible for analyzing patterns and for diagnosing and assessing the consequences of faults. The White MPSR (WMPSR) operators are always present for Shuttle operations; however, ISS FCR controllers only have support from their Blue MPSR (BMPSR) while the Shuttle is docked and during critical operations. Since ISS operates 24-7, the FCR team reduces to a much smaller Gemini team of 4-5 operators for night and weekend shifts when the crew is off-duty. The FCR is also supported by the Mission Evaluation Room (MER), which is a collection of contractor engineers 3. 
Development of an Automated Requirements Management System for the Space Station Freedom Program Science.gov (United States) Giffin, Geoff 1989-01-01 The Automated Requirements Management System, which is being developed to support traceability and documentation of Space Station Freedom requirements, is described. The objectives of requirements management are validation and verification. Other benefits include comprehensive analytical capabilities, commonality and timeliness of requirements information availability across the program, and the reduction of information duplication and overlap. 4. Simultaneous investigation of galactic cosmic rays on aircrafts and on International Space Station Czech Academy of Sciences Publication Activity Database Dachev, T.; Spurný, František; Reitz, G.; Tomov, B. T.; Dimitrov, P. G.; Matviichuk, Y. N. 2005-01-01 Roč. 36, č. 9 (2005), s. 1665-1670 ISSN 0273-1177 Institutional research plan: CEZ:AV0Z10480505 Keywords : cosmic rays * dosimetry * space station Subject RIV: DN - Health Impact of the Environment Quality Impact factor: 0.706, year: 2005 5. Development of the Space Station Freedom Refrigerator/Freezer and Freezer Science.gov (United States) Zelon, Jon; Saiz, John; Glaser, Peter 1991-01-01 This paper presents the current design configuration of the Space Station Freedom (SSF) Refrigerator/Freezer and Freezer (R/F and F) systems. In addition, this paper establishes the current analyses/trade study activity related to refrigeration system design and defines Environmental Control and Life Support System (ECLSS) interfaces, anticipated heat loads, maintenance approaches and safety concerns. 6. A multi-purpose tactile vest for astronauts in the international space station NARCIS (Netherlands) Erp, J.B.F. van; Veen, H.A.H.C. van 2003-01-01 During a 10 day taxiflight to the International Space Station (ISS) in 2004, Dutch astronaut André Kuipers is scheduled to test a multi-purpose vibrotactile vest. 
The main application of the vest is supporting the astronaut's orientation awareness. To this end, we employ an artificial gravity vector 7. International Space Station Science Information for Public Release on the NASA Web Portal Science.gov (United States) Robinson, Julie A.; Tate, Judy M. 2009-01-01 This document contains some of the descriptions of payloads and experiments related to life support and habitation. These describe experiments that have flown or are scheduled to fly on the International Space Station. There are instructions and descriptions of the fields that make up the database. The document is arranged in alphabetical order by the Payload 8. NASA Human Research Program (HRP). International Space Station Medical Project (ISSMP) Science.gov (United States) Sams, Clarence F. 2009-01-01 This viewgraph presentation describes the various flight investigations performed on the International Space Station as part of the NASA Human Research Program (HRP). The evaluations include: 1) Stability; 2) Periodic Fitness Evaluation with Oxygen Uptake Measurement; 3) Nutrition; 4) CCISS; 5) Sleep; 6) Braslet; 7) Integrated Immune; 8) Epstein-Barr; 9) Bisphosphonates; 10) Integrated Cardiovascular; and 11) VO2 max. 9. Crewmember and mission control personnel interactions during International Space Station missions. Science.gov (United States) Kanas, Nick A; Salnitskiy, Vyacheslav P; Boyd, Jennifer E; Gushin, Vadim I; Weiss, Daniel S; Saylor, Stephanie A; Kozerenko, Olga P; Marmar, Charles R 2007-06-01 Reports from astronauts and cosmonauts, studies from space analogue environments on Earth, and our previous research on the Mir Space Station have identified a number of psychosocial issues that can lead to problems during long-duration space missions. Three of these issues (time effects, displacement, leader role) were studied during a series of long-duration missions to the International Space Station (ISS). 
As in our previous Mir study, mood and group climate questions from the Profile of Mood States or POMS, the Group Environment Scale or GES, and the Work Environment Scale or WES were completed weekly by 17 ISS crewmembers (15 men, 2 women) in space and 128 American and Russian personnel in mission control. The results did not support the presence of decrements in mood and group cohesion during the 2nd half of the missions or in any specific quarter. The results did support the predicted displacement of negative feelings to outside supervisors in both crew and mission control subjects on all six questionnaire subscales tested. Crewmembers related cohesion in their group to the support role of their commander. For mission control personnel, greater cohesion was linked to the support role as well as to the task role of their leader. The findings from our previous study on the Mir Space Station were essentially replicated on board the ISS. The findings suggest a number of countermeasures for future on-orbit missions, some of which may not be relevant for expeditionary missions (e.g., to Mars). 10. NASA philosophy concerning space stations as operations centers for construction and maintenance of large orbiting energy systems Science.gov (United States) Freitag, R. F. 1976-01-01 Future United States plans for manned space-flight activities are summarized, emphasizing the long-term goals of achieving permanent occupancy and limited self-sufficiency in space. NASA-sponsored studies of earth-orbiting Space Station concepts are reviewed along with lessons learned from the Skylab missions. Descriptions are presented of the Space Transportation System, the Space Construction Base, and the concept of space industrialization (the processing and manufacturing of goods in space). Future plans for communications satellites, solar-power satellites, terrestrial observations from space stations, and manned orbital-transfer vehicles are discussed. 11. 
Successful Space Flight of High-Speed InGaAs Photodiode Onboard the International Space Station Science.gov (United States) Joshi, Abhay; Prasad, Narasimha; Datta, Shubbashish 2017-01-01 Photonic systems are required for several space applications, including satellite communication links and lidar sensors. Although such systems are ubiquitous in terrestrial applications, deployment in space requires the constituent components to withstand extreme environmental conditions, including wide operating temperature range, mechanical shock and vibration, and radiation. These conditions are significantly more stringent than alternative standards, namely Bellcore GR-468 and MIL-STD 883, which may be satisfied by typical, commercially available, photonic components. Furthermore, it is very difficult to simultaneously reproduce several aspects of space environment, including exposure to galactic cosmic rays (GCR), in a laboratory. Therefore, it is necessary to operate key photonic components in space to achieve a technology readiness level of 7 and beyond. Accordingly, the International Space Station (ISS) provides an invaluable test bed for qualifying such components for space missions. We present a fiber-pigtailed photodiode module, having a -3 dB bandwidth of 16.8 GHz, that survived 18 months on the ISS as part of the Materials International Space Station Experiment (MISSE) 7 mission. This module was launched by NASA Langley Research Center on November 16, 2009 on the Space Shuttle Atlantis (STS-129), as part of their lidar transceiver components. While orbiting on the ISS in a passive experiment container, the photodiode module was exposed to extreme temperature cycling from -157 degrees Celsius to +121 degrees Celsius 16 times a day, proton radiation from the inner Van Allen belt at the South Atlantic Anomaly, and galactic cosmic rays. The module returned to Earth on the Space Shuttle Endeavor (STS-134) on June 1, 2011 for further characterization. 
The post-flight test of the photodiode module, shown in Fig. 1a, demonstrates no change in the module's performance, thus proving its survivability during launch and in the space environment. 12. Space Stations. Science.gov (United States) 1980-04-01 13. Non-Quality Controlled Lightning Imaging Sensor (LIS) on International Space Station (ISS) Science Data Vb0 Data.gov (United States) National Aeronautics and Space Administration — The Non-Quality Controlled Lightning Imaging Sensor (LIS) on International Space Station (ISS) Science Data were collected by the LIS instrument on the ISS used to... 14. Impacting Space Station Freedom design with operations and safety requirements - An availability process Science.gov (United States) Garegnani, Jerry J.; Schondorf, Steven Y. 1990-01-01 The unusually long mission duration of Space Station Freedom leads to operations costs that have significant impacts on life-cycle cost relative to previous manned space programs. Maintaining an affordable program requires that operations costs be considered throughout the design process. An appropriate means of impacting the design with operations concerns is to specify requirements that ensure operational effectiveness when implemented. The Space Station Freedom Program has developed a process defining such requirements. It focuses on specifying functional profiles and allocating resources such that designers gain a better understanding of the operational envelope in which their systems must perform. This paper examines the details of the process, where it came from, and why it is effective. 15. 
MISSE7: Building a Permanent Environmental Testbed for the International Space Station Science.gov (United States) Jenkins, Phillip P.; Walters, Robert J.; Krasowski, Michael J.; Chapman, John J.; Ballard, Perry G.; Vasquez, John A.; Mahony, Denis R.; Lacava, Susie N.; Braun, William R.; Skalitzky, Robert; Prokop, Norman F.; Flatico, Joseph M.; Greer, Lawrence C.; Gibson, Karen B.; Kinard, William H.; Pippin, H. Gary 2009-01-01 The Materials on the International Space Station Experiments (MISSE) provide low-cost material exposure experiments on the exterior of the International Space Station (ISS). The original concept of a suitcase-like box bolted to the ISS to passively expose materials to space has grown to include increasingly complex in situ characterization. As the ISS completes construction, the facilities available to MISSE experiments will increase dramatically. MISSE7 is the first MISSE to take advantage of this new infrastructure. In addition to material exposure, MISSE7 will include characterization of single-event radiation effects on electronics and of solar cell performance in LEO. MISSE7 will exploit the ISS Express Logistics Carrier power and data capabilities and will leave behind a MISSE-specific infrastructure for future missions. 16. An overview of the space medicine program and development of the Health Maintenance Facility for Space Station Science.gov (United States) Pool, Sam Lee 1988-01-01 Because a prolonged stay on board the Space Station will carry a greater risk of in-flight medical problems than the Skylab missions did, the Health Maintenance Facility (HMF) planned for the Space Station is much more sophisticated than the small clinics of the Skylab missions. The development of the HMF is guided by three primary considerations: the prevention, diagnosis, and treatment of injuries and illnesses that may occur in flight. 
The major components of the HMF include the clinical laboratory, pharmacy, imaging system, critical-care system, patient-restraint system, data-management system, exercise system, surgical system, electrophysiologic-monitoring system, intravenous-fluid system, dental system, and hyperbaric-treatment-support system. 17. International Space Station (ISS) Metal Oxide (MetOx) Odor Anomaly Science.gov (United States) Prokhorov, Kimberlee; Lewis, John; Graf, John; Perry, Jay 2004-01-01 On occasion, seemingly normal operations can have significant effects upon the closed environment of the International Space Station (ISS). An example of such a case occurred on February 20, 2002 when a nominal Metal Oxide (MetOx) canister regeneration operation onboard the ISS resulted in an unexpected, foul odor that affected the crew and station operations. A case study summarizing the root cause for the event and steps taken to ensure that future MetOx regeneration operations proceed safely is presented. Included in the summary are engineering analyses and environmental monitoring results supporting the root cause assessment as well as testing conducted and flight operations changes implemented to ensure safe operations. 18. Science and payload options for animal and plant research accommodations aboard the early Space Station Science.gov (United States) Hilchey, John D.; Arno, Roger D.; Gustan, Edith; Rudiger, C. E. 1986-01-01 The resources to be allocated for the development of the Initial Operational Capability (IOC) Space Station Animal and Plant Research Facility and the Growth Station Animal and Plant Vivarium and Laboratory may be limited; also, IOC accommodations for animal and plant research may be limited. 
An approach is presented for the development of Initial Research Capability Minilabs for animal and plant studies, which in appropriate combination and sequence can meet requirements for an evolving program of research within available accommodations and anticipated budget constraints. 19. Cultural differences in crewmembers and mission control personnel during two space station programs. Science.gov (United States) Boyd, Jennifer E; Kanas, Nick A; Salnitskiy, Vyacheslav P; Gushin, Vadim I; Saylor, Stephanie A; Weiss, Daniel S; Marmar, Charles R 2009-06-01 Cultural differences among crewmembers and mission control personnel can affect long-duration space missions. We examine three cultural contrasts: national (American vs. Russian); occupational (crewmembers vs. mission control personnel); and organizational [Mir space station vs. International Space Station (ISS)]. The Mir sample included 5 American astronauts, 8 Russian cosmonauts, and 42 American and 16 Russian mission control personnel. The ISS sample included 8 astronauts, 9 cosmonauts, and 108 American and 20 Russian mission control personnel. Subjects responded to mood and group climate questions on a weekly basis. The ISS sample also completed a culture and language questionnaire. Crewmembers had higher scores on cultural sophistication than mission control personnel, especially American mission control. Cultural sophistication was not related to mood or social climate. Russian subjects reported greater language flexibility than Americans. Crewmembers reported better mood states than mission control, but both were in the healthy range. There were several Russian-American differences in social climate, with the most robust being higher work pressure among Americans. Russian-American social climate differences were also found in analyses of crew only. Analyses showed Mir-ISS differences in social climate among crew but not in the full sample. 
We found evidence for national, occupational, and organizational cultural differences. The findings from the Mir space station were essentially replicated on the ISS. Alterations to the ISS to make it a more user-friendly environment have still not resolved the issue of high levels of work pressure among the American crew. 20. A Study of Cosmic Ray Secondaries Induced by the Mir Space Station Using AMS-01 CERN Document Server Aguilar, M.; Allaby, J.; Alpat, B.; Ambrosi, G.; Anderhub, H.; Ao, L.; Arefiev, A.; Azzarello, P.; Babucci, E.; Baldini, L.; Basile, M.; Barancourt, D.; Barao, F.; Barbier, G.; Barreira, G.; Battiston, R.; Becker, R.; Becker, U.; Bellagamba, L.; Bene, P.; Berdugo, J.; Berges, P.; Bertucci, B.; Biland, A.; Bizzaglia, S.; Blasko, S.; Boella, G.; Boschini, M.; Bourquin, M.; Brocco, L.; Bruni, G.; Buenerd, M.; Burger, J.D.; Burger, W.J.; Cai, X.D.; Camps, C.; Cannarsa, P.; Capell, M.; Carosi, G.; Casadei, D.; Casaus, J.; Castellini, G.; Cecchi, C.; Chang, Y.H.; Chen, H.F.; Chen, H.S.; Chen, Z.G.; Chernoplekov, N.A.; Chiueh, T.H.; Cho, K.; Choi, M.J.; Choi, Y.Y.; Chuang, Y.L.; Cindolo, F.; Commichau, V.; Contin, A.; Cortina-Gil, E.; Cristinziani, M.; da Cunha, J.P.; Dai, T.S.; Delgado, C.; Demirkoz, Bilge; Deus, J.D.; Dinu, N.; Djambazov, L.; D'Antone, I.; Dong, Z.R.; Emonet, P.; Engelberg, J.; Eppling, F.J.; Eronen, T.; Esposito, G.; Extermann, P.; Favier, J.; Fiandrini, E.; Fisher, P.H.; Fluegge, G.; Fouque, N.; Galaktionov, Iouri; Gervasi, M.; Giusti, P.; Grandi, D.; Grimm, O.; Gu, W.Q.; Hangarter, K.; Hasan, A.; Henning, R.; Hermel, V.; Hofer, H.; Huang, M.A.; Hungerford, W.; Ionica, M.; Ionica, R.; Jongmanns, M.; Karlamaa, K.; Karpinski, W.; Kenney, G.; Kenny, J.; Kim, D.H.; Kim, G.N.; Kim, K.S.; Kim, M.Y.; Klimentov, A.; Kossakowski, R.; Koutsenko, V.; Kraeber, M.; Laborie, G.; Laitinen, T.; Lamanna, G.; Lanciotti, E.; Laurenti, G.; Lebedev, A.; Lechanoine-Leluc, C.; Lee, M.W.; Lee, S.C.; Levi, G.; Levtchenko, P.; Liu, C.L.; Liu, H.T.; Lopes, 
I.; Lu, G.; Lu, Y.S.; Lubelsmeyer, K.; Luckey, David; Lustermann, W.; Mana, C.; Margotti, A.; Mayet, F.; McNeil, R.R.; Meillon, B.; Menichelli, M.; Mihul, A.; Monreal, B.; Mourao, A.; Mujunen, A.; Palmonari, F.; Papi, A.; Park, H.B.; Park, W.H.; Pauluzzi, M.; Pauss, F.; Perrin, E.; Pesci, A.; Pevsner, A.; Pimenta, M.; Plyaskin, V.; Pojidaev, V.; Pohl, M.; Postolache, V.; Produit, N.; Rancoita, P.G.; Rapin, D.; Raupach, F.; Ren, D.; Ren, Z.; Ribordy, M.; Richeux, J.P.; Riihonen, E.; Ritakari, J.; Ro, S.; Roeser, U.; Rossin, C.; Sagdeev, R.; Santos, D.; Sartorelli, G.; Sbarra, C.; Schael, S.; Schultz von Dratzig, A.; Schwering, G.; Scolieri, G.; Seo, E.S.; Shin, J.W.; Shoumilov, E.; Shoutko, V.; Siedling, R.; Son, D.; Song, T.; Steuer, M.; Sun, G.S.; Suter, H.; Tang, X.W.; Ting, Samuel C.C.; Ting, S.M.; Tornikoski, M.; Torsti, J.; Trumper, J.; Ulbricht, J.; Urpo, S.; Valtonen, E.; Vandenhirtz, J.; Velcea, F.; Velikhov, E.; Verlaat, B.; Vetlitsky, I.; Vezzu, F.; Vialle, J.P.; Viertel, G.; Vite, Davide F.; Von Gunten, H.; Waldemeier Wicki, S.; Wallraff, W.; Wang, B.C.; Wang, J.Z.; Wang, Y.H.; Wiik, K.; Williams, C.; Wu, S.X.; Xia, P.C.; Yan, J.L.; Yan, L.G.; Yang, C.G.; Yang, J.; Yang, M.; Ye, S.W.; Yeh, P.; Xu, Z.Z.; Zhang, H.Y.; Zhang, Z.P.; Zhao, D.X.; Zhu, G.Y.; Zhu, W.Z.; Zhuang, H.L.; Zichichi, A.; Zimmermann, B.; Zuccon, P. 2004-01-01 The Alpha Magnetic Spectrometer (AMS-02) is a high energy particle physics experiment that will study cosmic rays in the $\\sim 100 \\mathrm{MeV}$ to $1 \\mathrm{TeV}$ range and will be installed on the International Space Station (ISS) for at least 3 years. A first version of AMS-02, AMS-01, flew aboard the space shuttle \\emph{Discovery} from June 2 to June 12, 1998, and collected $10^8$ cosmic ray triggers. Part of the \\emph{Mir} space station was within the AMS-01 field of view during the four day \\emph{Mir} docking phase of this flight. 
We have reconstructed an image of this part of the Mir space station using secondary π− and μ− emissions from primary cosmic rays interacting with Mir. This is the first time this reconstruction was performed in AMS-01, and it is important for understanding potential backgrounds during the 3-year AMS-02 mission. 1. The Logistic Path from the International Space Station to the Moon and Beyond Science.gov (United States) Watson, J. K.; Dempsey, C. A.; Butina, A. J., Sr. 2005-01-01 The period from the loss of the Space Shuttle Columbia in February 2003 to the resumption of Space Shuttle flights, planned for May 2005, has presented significant challenges to International Space Station (ISS) maintenance operations. Sharply curtailed upmass capability has forced NASA to revise its support strategy and to undertake maintenance activities that have significantly expanded the envelope of the ISS maintenance concept. This experience has enhanced confidence in the ability to continue to support ISS in the period following the permanent retirement of the Space Shuttle fleet in 2010. Even greater challenges face NASA with the implementation of the Vision for Space Exploration that will introduce extended missions to the Moon beginning in the period of 2015 - 2020 and ultimately see human missions to more distant destinations such as Mars. The experience and capabilities acquired through meeting the maintenance challenges of ISS will serve as the foundation for the maintenance strategy that will be employed in support of these future missions. 2. Rapid Monitoring of Bacteria and Fungi aboard the International Space Station (ISS) Science.gov (United States) Gunter, D.; Flores, G.; Effinger, M.; Maule, J.; Wainwright, N.; Steele, A.; Damon, M.; Wells, M.; Williams, S.; Morris, H. 2009-01-01 Microorganisms within spacecraft have traditionally been monitored with culture-based techniques. 
These techniques involve growth of environmental samples (cabin water, air or surfaces) on agar-type media for several days, followed by visualization of resulting colonies or return of samples to Earth for ground-based analysis. Data obtained over the past 4 decades have enhanced our understanding of the microbial ecology within space stations. However, the approach has been limited by the following factors: i) Many microorganisms (estimated > 95%) in the environment cannot grow on conventional growth media; ii) Significant time lags (3-5 days for incubation and up to several months to return samples to ground); iii) Condensation in contact slides hinders colony counting by crew; and iv) Growth of potentially harmful microorganisms, which must then be disposed of safely. This report describes the operation of a new culture-independent technique onboard the ISS for rapid analysis (within minutes) of endotoxin and beta-1,3-glucan, found in the cell walls of gram-negative bacteria and fungi, respectively. The technique involves analysis of environmental samples with the Limulus Amebocyte Lysate (LAL) assay in a handheld device, known as the Lab-On-a-Chip Application Development Portable Test System (LOCAD-PTS). LOCAD-PTS was launched to the ISS in December 2006, and here we present data obtained from March 2007 until the present day. These data include a comparative study between LOCAD-PTS analysis and existing culture-based methods, and an exploratory survey of surface endotoxin and beta-1,3-glucan throughout the ISS. While a general correlation between LOCAD-PTS and traditional culture-based methods should not be expected, we will suggest new requirements for microbial monitoring based upon culture-independent parameters measured by LOCAD-PTS. 3. Science.gov (United States) Wallace, William T.; Limero, Thomas F.; Loh, Leslie J.; Mudgett, Paul D.; Gazda, Daniel B. 
2017-01-01 During the early years of human spaceflight, short-duration missions allowed monitoring of the spacecraft environment to be performed via archival sampling, in which samples were returned to Earth for analysis. With the construction of the International Space Station (ISS) and the accompanying extended mission durations, the need for enhanced, real-time monitors became apparent. The Volatile Organic Analyzer (VOA) operated on ISS for 7 years, where it assessed trace volatile organic compounds in the cabin air. The large and fixed-position VOA was eventually replaced with the smaller Air Quality Monitor (AQM). Since March 2013, the atmosphere of the U.S. Operating Segment (USOS) has been monitored in near real-time by a pair of AQMs. These devices consist of a gas chromatograph (GC) coupled with a differential mobility spectrometer (DMS) and currently target a detection list of 22 compounds. These targets are of importance to both crew health and the Environmental Control and Life Support Systems (ECLSS) on ISS. Data are collected autonomously every 73 hours, though the units can be controlled remotely from mission control to collect data more frequently during contingency or troubleshooting operations. Due to a nominal three-year lifetime on-orbit, the initial units were replaced in February 2016. This paper will focus on the preparation and use of the AQMs over the past several years. A description of the technical aspects of the AQM will be followed by lessons learned from the deployment and operation of the first set of AQMs. These lessons were used to improve the already-excellent performance of the instruments prior to deployment of the replacement units. Data trending over the past several years of operation on ISS will also be discussed, including data obtained during a survey of the USOS modules. Finally, a description of AQM use for contingency and investigative studies will be presented. 4. 
Estimating spares requirements for Space Station Freedom using the M-SPARE model Science.gov (United States) Kline, Robert C.; Sherbrooke, Craig C. 1992-08-01 The Logistics Management Institute developed a methodology that estimates the optimal orbital replaceable unit (ORU) spares inventory for NASA's Space Station Freedom. NASA is using this methodology to select a spares inventory that will maximize station availability, i.e., the probability that no critical system is inoperative for lack of an ORU spare over the resupply cycle. It is based upon a marginal analysis approach. Spares are ranked in order of decreasing benefit per cost (the improvement provided to station availability per dollar) and added, in that order, to the inventory until a target resource expenditure or availability is reached. The methodology also develops optimal spares inventories constrained by the spares weight the shuttle can carry, the spares volume the station can store, or a combination of resources. To implement our methodology, we developed the Multiple Spares Prioritization and Availability to Resource Evaluation (M-SPARE) model that operates on a personal computer. M-SPARE presents the maximum availability for an entire range of resource expenditures. The model also converts annual spares requirements over any period of the station's life into funding estimates for the next 9 years. In this guide, we describe the M-SPARE methodology, operation, and analytical capabilities. 5. Space Station Habitability Recommendations Based on a Systematic Comparative Analysis of Analogous Conditions Science.gov (United States) Stuster, Jack W. 1986-01-01 Conditions analogous to the proposed NASA Space Station are systematically analyzed in order to extrapolate design guidelines and recommendations concerning habitability and crew productivity. Analogous environments studied included Skylab, Sealab, Tektite, submarines, Antarctic stations and oil drilling platforms, among others. 
These analogues were compared and rated for size and composition of group, social organization, preparedness for mission, duration of tour, types of tasks, physical and psychological isolation, personal motivation, perceived risk, and quality of habitat and life support conditions. One hundred design recommendations concerning sleep, clothing, exercise, medical support, personal hygiene, food preparation, group interaction, habitat aesthetics, outside communications, recreational opportunities, privacy and personal space, waste disposal, onboard training, simulation and task preparation, and behavioral and physiological requirements associated with a microgravity environment are provided. 6. System requirements and design features of Space Station Remote Manipulator System mechanisms Science.gov (United States) Kumar, Rajnish; Hayes, Robert 1991-01-01 The Space Station Remote Manipulator System (SSRMS) is a long robotic arm for handling large objects/payloads on the International Space Station Freedom. The mechanical components of the SSRMS include seven joints, two latching end effectors (LEEs), and two boom assemblies. The joints and LEEs are complex aerospace mechanisms. The system requirements and design features of these mechanisms are presented. All seven joints of the SSRMS have identical functional performance. The two LEEs are identical. This feature allows either end of the SSRMS to be used as tip or base. As compared to the end effector of the Shuttle Remote Manipulator System, the LEE has a latch and umbilical mechanism in addition to the snare and rigidize mechanisms. The latches increase the interface preload and allow large payloads (up to 116,000 kg) to be handled. The umbilical connectors provide power, data, and video signal transfer capability to/from the SSRMS. 7.
Space Station Freedom assembly and operation at a 51.6 degree inclination orbit Science.gov (United States) Troutman, Patrick A.; Brewer, Laura M.; Heck, Michael L.; Kumar, Renjith R. 1993-01-01 This study examines the implications of assembling and operating Space Station Freedom at a 51.6 degree inclination orbit utilizing an enhanced lift Space Shuttle. Freedom assembly is currently baselined at a 220 nautical mile high, 28.5 degree inclination orbit. Some of the reasons for increasing the orbital inclination are (1) increased ground coverage for Earth observations, (2) greater accessibility from Russian and other international launch sites, and (3) increased number of Assured Crew Return Vehicle (ACRV) landing sites. Previous studies have looked at assembling Freedom at a higher inclination using both medium and heavy lift expendable launch vehicles (such as Shuttle-C and Energia). The study assumes that the shuttle is used exclusively for delivering the station to orbit and that it can gain additional payload capability from design changes such as a lighter external tank that somewhat offsets the performance decrease that occurs when the shuttle is launched to a 51.6 degree inclination orbit. 8. Stress Corrosion Evaluation of Nitinol 60 for the International Space Station Water Recycling System Science.gov (United States) Torres, P. D. 2016-01-01 A stress corrosion cracking (SCC) evaluation of Nitinol 60 was performed because this alloy is considered a candidate bearing material for the Environmental Control and Life Support System (ECLSS), specifically in the Urine Processing Assembly of the International Space Station. An SCC evaluation that preceded this one during the 2013-2014 timeframe included various alloys: Inconel 625, Hastelloy C-276, titanium (Ti) commercially pure (CP), Ti 6Al-4V, extra-low interstitial (ELI) Ti 6Al-4V, and Cronidur 30. In that evaluation, most specimens were exposed for a year. 
The results of that evaluation were published in NASA/TM-2015-218206, entitled "Stress Corrosion Evaluation of Various Metallic Materials for the International Space Station Water Recycling System," available at the NASA Scientific and Technical Information program web page: http://www.sti.nasa.gov. Nitinol 60 was added to the test program in 2014. 9. Summary of Current and Future MSFC International Space Station Environmental Control and Life Support System Activities Science.gov (United States) Ray, Charles D.; Carrasquillo, Robyn L.; Minton-Summers, Silvia 1997-01-01 This paper provides a summary of current work accomplished under technical task agreement (TTA) by the Marshall Space Flight Center (MSFC) regarding the Environmental Control and Life Support System (ECLSS) as well as future planning activities in support of the International Space Station (ISS). Current activities include ECLSS computer model development, component design and development, subsystem integrated system testing, life testing, and government furnished equipment delivered to the ISS program. A long range plan for the MSFC ECLSS test facility is described whereby the current facility would be upgraded to support integrated station ECLSS operations. ECLSS technology development efforts proposed to be performed under the Advanced Engineering Technology Development (AETD) program are also discussed. 10. Integrated failure detection and management for the Space Station Freedom external active thermal control system Science.gov (United States) Mesloh, Nick; Hill, Tim; Kosyk, Kathy 1993-01-01 This paper presents the integrated approach toward failure detection, isolation, and recovery/reconfiguration to be used for the Space Station Freedom External Active Thermal Control System (EATCS). The on-board and on-ground diagnostic capabilities of the EATCS are discussed.
Time and safety critical features, as well as noncritical failures, and the detection coverage for each provided by existing capabilities are reviewed. The allocation of responsibility between on-board software and ground-based systems, to be shown during ground testing at the Johnson Space Center, is described. Failure isolation capabilities allocated to the ground include some functionality originally found on orbit but moved to the ground to reduce on-board resource requirements. Complex failures requiring the analysis of multiple external variables, such as environmental conditions, heat loads, or station attitude, are also allocated to ground personnel. 11. Feasibility Study of Data Receiving Station in Korea For CSA UV Space Telescope Project Directory of Open Access Journals (Sweden) Myung-Kook Jee 1998-06-01 Full Text Available We present a feasibility study of a data receiving station in Korea to be used for a 50 cm UV space telescope proposed by CSA. The feasibility was investigated by examining the spacecraft visibility from four different cities in Korea, based on the orbital characteristics of the proposed spacecraft, i.e. inclination of 28.5 deg and circular orbit altitude of 690 km. The satellite can be accessed from Korea about 4 times a day, each pass having the duration of 6 to 9 minutes depending on the elevation mask and the latitude of each site. Provided that the X-Band signal can be retrieved from 10 deg elevation, this study demonstrates that a ground station placed in any of the four cities can be used for a reasonable backup downlink of the science data gathered by the proposed UV space telescope. 12. Space station needs, attributes and architectural options study. Briefing material: Final review and executive summary Science.gov (United States) 1983-01-01 Advantages and disadvantages were assessed for configuration options for a modular 14' diameter space station, a modular aft cargo carrier and a shuttle derived vehicle.
Early, intermediate, and mature configurations were defined as well as power requirements, heat rejection, hydrazine usage, and the external scavenging concept. Subsystems were analyzed for propulsion, attitude control, data processing, and communications. Areas of uncertainties, associated costs and benefits, and the cost by phase of the modular and shuttle derived vehicle configurations were identified. Technologies assessed included solar vs nuclear; gravity gradient vs active control; heat pipe radiators vs fluid loops; distributed processors vs centralized; and modular vs shuttle derived configuration. It was determined that the early space station architecture should include: (1) reusable OTV with aerobraking; (2) TMS with telepresence services; (3) OTV/TMS refueling and servicing capability; and (4) attached research laboratories for life sciences and materials processing. Science.gov (United States) 1985-01-01 The current Space Station Systems Technology Study add-on task was an outgrowth of the Advanced Platform Systems Technology Study (APSTS) that was completed in April 1983 and the subsequent Space Station System Technology Study completed in April 1984. The first APSTS proceeded from the identification of 106 technology topics to the selection of five for detailed trade studies. During the advanced platform study, the technical issues and options were evaluated through detailed trade processes, individual consideration was given to costs and benefits for the technologies identified for advancement, and advancement plans were developed. An approach similar to that was used in the subsequent study, with emphasis on system definition in four specific technology areas to facilitate a more in-depth analysis of technology issues. 14. A study of some features of ac and dc electric power systems for a space station Science.gov (United States) Hanania, J. I.
1983-01-01 This study analyzes certain selected topics in rival dc and high-frequency ac electric power systems for a Space Station. The interaction between the Space Station and the plasma environment is analyzed, leading to a limit on the voltage for the solar array and a potential problem with resonance coupling at high frequencies. Certain problems are pointed out in the concept of a rotary transformer, and further development work is indicated in connection with dc circuit switching, special design of a transmission conductor for the ac system, and electric motors. The question of electric shock hazards, particularly at high frequency, is also explored, and a problem with reduced skin resistance, and therefore increased hazard, with high-frequency ac is pointed out. The study concludes with a comparison of the main advantages and disadvantages of the two rival systems, and it is suggested that the choice between the two should be made after further studies and development work are completed. 15. Conceptual Kinematic Design and Performance Evaluation of a Chameleon-Like Service Robot for Space Stations Directory of Open Access Journals (Sweden) Marco Ceccarelli 2015-03-01 Full Text Available In this paper a conceptual kinematic design of a chameleon-like robot with proper mobility capacity is presented for service applications in space stations as a result of design considerations with biomimetic inspiration by looking at chameleons. Requirements and characteristics are discussed with the aim to identify design problems and operation features. A study of feasibility is described through performance evaluation by using simulations for a basic operation characterization. 16. JSC flight experiment recommendation in support of Space Station robotic operations Science.gov (United States) Berka, Reginald B.
1993-01-01 The man-tended configuration (MTC) of Space Station Freedom (SSF) provides a unique opportunity to move robotic systems from the laboratory into the mainstream space program. Restricted crew access due to the Shuttle's flight rate, as well as constrained on-orbit stay time, reduces the productivity of a facility dependent on astronauts to perform useful work. A natural tendency toward robotics to perform maintenance and routine tasks will be seen in efforts to increase SSF usefulness. This tendency will provide the foothold for deploying space robots. This paper outlines a flight experiment that will capitalize on the investment in robotic technology made by NASA over the past ten years. The flight experiment described herein provides the technology demonstration necessary for taking advantage of the expected opportunity at MTC. As a context to this flight experiment, a broader view of the strategy developed at the JSC is required. The JSC is building toward MTC by developing a ground-based SSF emulation funded jointly by internal funds, NASA/Code R, and NASA/Code M. The purpose of this ground-based Station is to provide a platform whereby technology originally developed at JPL, LaRC, and GSFC can be integrated into a near flight-like condition. For instance, the Automated Robotic Maintenance of Space Station (ARMSS) project integrates flat targets, surface inspection, and other JPL technologies into a Station analogy for evaluation. Also, ARMSS provides the experimental platform for the Capaciflector from GSFC to be evaluated for its usefulness in performing ORU change out or other tasks where proximity detection is required. The use and enhancement of these ground-based SSF models are planned through FY-93. The experimental data gathered from tests in these facilities will provide the basis for the technology content of the proposed flight experiment. 17.
Nuclei Measurements with the Alpha Magnetic Spectrometer on the International Space Station Directory of Open Access Journals (Sweden) Heil Melanie 2017-01-01 Full Text Available The exact behavior of nuclei fluxes in cosmic rays and how they relate to each other is important for understanding the production, acceleration and propagation mechanisms of charged cosmic rays. Precise measurements with the Alpha Magnetic Spectrometer on the International Space Station of light nuclei fluxes and their ratios in primary cosmic rays with rigidities from GV to TV are presented. The high statistics of the measurements require detailed studies and in depth understanding of associated systematic uncertainties. 18. Space station data system analysis/architecture study. Task 4: System definition report Science.gov (United States) 1985-01-01 Functional/performance requirements for the Space Station Data System (SSDS) are analyzed and architectural design concepts are derived and evaluated in terms of their performance and growth potential, technical feasibility and risk, and cost effectiveness. The design concepts discussed are grouped under five major areas: SSDS top-level architecture overview, end-to-end SSDS design and operations perspective, communications assumptions and traffic analysis, onboard SSDS definition, and ground SSDS definition. 19. The Canadian space program from Black Brant to the International Space Station CERN Document Server Godefroy, Andrew B 2017-01-01 Canada’s space efforts from its origins towards the end of the Second World War through to its participation in the ISS today are revealed in full in this complete and carefully researched history. Employing recently declassified archives and many never previously used sources, author Andrew B. Godefroy explains the history of the program through its policy and many fascinating projects. 
He assesses its effectiveness as a major partner in both US and international space programs, examines its current national priorities and capabilities, and outlines the country's plans for the future. Despite being the third nation to launch a satellite into space after the Soviet Union and the United States; being a major partner in the US space shuttle program with the iconic Canadarm; being an international leader in the development of space robotics; and acting as one of the five major partners in the ISS, the Canadian Space Program remains one of the least well-known national efforts of the space age. This book atte... 20. User needs, benefits, and integration of robotic systems in a space station laboratory Science.gov (United States) Dodd, W. R.; Badgley, M. B.; Konkel, C. R. 1989-01-01 The methodology, results and conclusions of all tasks of the User Needs, Benefits, and Integration Study (UNBIS) of Robotic Systems in a Space Station Laboratory are summarized. Study goals included the determination of user requirements for robotics within the Space Station, United States Laboratory. In Task 1, three experiments were selected to determine user needs and to allow detailed investigation of microgravity requirements. In Task 2, a NASTRAN analysis of Space Station response to robotic disturbances, and acceleration measurement of a standard industrial robot (Intelledex Model 660) resulted in selection of two ranges of microgravity manipulation: Level 1 (10⁻³ to 10⁻⁵ g at greater than 1 Hz) and Level 2 (≤ 10⁻⁶ g at 0.1 Hz). This task included an evaluation of microstepping methods for controlling stepper motors and concluded that an industrial robot actuator can perform milli-g motion without modification. Relative merits of end-effectors and manipulators were studied in Task 3 in order to determine their ability to perform a range of tasks related to the three microgravity experiments.
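The two manipulation ranges selected in Task 2 of the UNBIS study above amount to a simple amplitude-and-frequency check on a measured disturbance. The function below is a hypothetical illustration of that classification; the name and the treatment of the range boundaries are assumptions, not part of the study:

```python
def microgravity_level(accel_g, freq_hz):
    """Classify a robotic disturbance against the two UNBIS manipulation ranges:
    Level 2 is the stricter range (at most 1e-6 g at 0.1 Hz);
    Level 1 covers 1e-5 to 1e-3 g at frequencies above 1 Hz.
    Returns "Level 2", "Level 1", or None if the disturbance fits neither range."""
    if accel_g <= 1e-6 and freq_hz <= 0.1:
        return "Level 2"
    if freq_hz > 1.0 and 1e-5 <= accel_g <= 1e-3:
        return "Level 1"
    return None
```

A disturbance of 0.5 micro-g at 0.1 Hz would qualify as Level 2, while a 100 micro-g disturbance at 5 Hz falls in Level 1; anything larger or slower than both ranges is rejected.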
An Effectivity Rating was established for evaluating these robotic system capabilities. Preliminary interface requirements for an orbital flight demonstration were determined in Task 4. Task 5 assessed the impact of robotics. 1. Implications of privacy needs and interpersonal distancing mechanisms for space station design Science.gov (United States) Harrison, Albert A.; Sommer, Robert; Struthers, Nancy; Hoyt, Kathleen 1988-01-01 Isolation, confinement, and the characteristics of microgravity will accentuate the need for privacy in the proposed NASA space station, yet limit the mechanism available for achieving it. This study proposes a quantitative model for understanding privacy, interpersonal distancing, and performance, and discusses the practical implications for Space Station design. A review of the relevant literature provided the basis for a database, definitions of physical and psychological distancing, loneliness, and crowding, and a quantitative model of situational privacy. The model defines situational privacy (the match between environment and task), and focuses on interpersonal contact along visual, auditory, olfactory, and tactile dimensions. It involves summing across pairs of crew members, contact dimensions, and time, yet also permits separate analyses of subsets of crew members and contact dimensions. The study concludes that performance will benefit when the type and level of contact afforded by the environment align with that required by the task. The key to achieving this is to design a flexible, definable, and redefinable interior environment that provides occupants with a wide array of options to meet their needs for solitude, limited social interaction, and open group activity. The report presents 49 recommendations in five categories to promote a wide range of privacy options despite the space station's volumetric limitations. 2. 
Virtual workstations and telepresence interfaces: Design accommodations and prototypes for Space Station Freedom evolution Science.gov (United States) Mcgreevy, Michael W. 1990-01-01 An advanced human-system interface is being developed for evolutionary Space Station Freedom as part of the NASA Office of Space Station (OSS) Advanced Development Program. The human-system interface is based on body-pointed display and control devices. The project will identify and document the design accommodations ('hooks and scars') required to support virtual workstations and telepresence interfaces, and prototype interface systems will be built, evaluated, and refined. The project is a joint enterprise of Marquette University, Astronautics Corporation of America (ACA), and NASA's ARC. The project team is working with NASA's JSC and McDonnell Douglas Astronautics Company (the Work Package contractor) to ensure that the project is consistent with space station user requirements and program constraints. Documentation describing design accommodations and tradeoffs will be provided to OSS, JSC, and McDonnell Douglas, and prototype interface devices will be delivered to ARC and JSC. ACA intends to commercialize derivatives of the interface for use with computer systems developed for scientific visualization and system simulation. 3. Space station Simulation Computer System (SCS) study for NASA/MSFC. Volume 4: Conceptual design report Science.gov (United States) 1989-01-01 The Simulation Computer System (SCS) is the computer hardware, software, and workstations that will support the Payload Training Complex (PTC) at Marshall Space Flight Center (MSFC). The PTC will train the space station payload scientists, station scientists, and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. 
In the first step of this task, a methodology was developed to ensure that all relevant design dimensions were addressed, and that all feasible designs could be considered. The development effort yielded the following method for generating and comparing designs in task 4: (1) Extract SCS system requirements (functions) from the system specification; (2) Develop design evaluation criteria; (3) Identify system architectural dimensions relevant to SCS system designs; (4) Develop conceptual designs based on the system requirements and architectural dimensions identified in step 1 and step 3 above; (5) Evaluate the designs with respect to the design evaluation criteria developed in step 2 above. The results of the method detailed in the above 5 steps are discussed. The task 4 work provides the set of designs from which two or three candidate designs are to be selected by MSFC as input to task 5 (refine SCS conceptual designs). The designs selected for refinement will be developed to a lower level of detail, and further analyses will be done to begin to determine the size and speed of the components required to implement these designs. 4. Tethered elevator and platforms as space station facilities: Systems studies and demonstrative experiments Science.gov (United States) 1986-01-01 Several key concepts of the science and applications tethered platforms were studied. Some of the conclusions reached are listed here. The tethered elevator and platform could improve the space station's scientific and applicative capabilities. The space elevator presents unique characteristics as a microgravity facility and as a tethered platform servicing vehicle. Pointing platforms could represent a new kind of observation facility for a large class of payloads. The dynamical, control, and technological complexity of these concepts motivated demonstrative experiments. The on-going tethered satellite system offers the opportunity to perform such experiments, and feasibility studies are in progress. 5.
A Power Distribution System for the AMS experiment on the International Space Station Energy Technology Data Exchange (ETDEWEB) Kim, K S [Department of Physics, EWHA Womans University, Seoul, Korea (Korea, Republic of); Capell, M [Massachusetts Institute of Technology, MIT, Cambridge, MA 02139 (United States); Lebedev, A [Massachusetts Institute of Technology, MIT, Cambridge, MA 02139 (United States); Viertel, G M [ETH-Zuerich, Labor fuer Hochenergiephysik, Zurich (Switzerland); Yang, J [Department of Physics, EWHA Womans University, Seoul (Korea, Republic of) 2006-12-15 The Alpha Magnetic Spectrometer (AMS) experiment on the International Space Station (ISS) requires a fully redundant, highly efficient, space qualified power distribution system. The device receives up to 2.8 kW of electrical power from the ISS and distributes it to the various subsystems of the experiment. The majority of these subsystems require the power to be converted from the ISS delivered nominal voltage level of 120 VDC to 28 VDC. The entire system and the individual output channels will be monitored and controlled from the Control Centers on the ground. 6. The computer-communication link for the innovative use of Space Station Science.gov (United States) Carroll, C. C. 1984-01-01 The potential capability of the computer-communications system link of space station is related to innovative utilization for industrial applications. Conceptual computer network architectures are presented and their respective accommodation of innovative industrial projects are discussed. To achieve maximum system availability for industrialization is a possible design goal, which would place the industrial community in an interactive mode with facilities in space. A worthy design goal would be to minimize the computer-communication management function and thereby optimize the system availability for industrial users. 
Quasi-autonomous modes and subnetworks are key design issues, since they would be the system elements directly affecting the system performance for industrial use. 7. Tissue equivalent detector measurements on Mir space station. Comparison with other data Energy Technology Data Exchange (ETDEWEB) Bottollier-Depois, J.F. [CEA Centre d'Etudes de Fontenay-aux-Roses, 92 (France). Dept. de Protection de la Sante de l'Homme et de Dosimetrie; Siegrist, M. [Centre National d'Etudes Spatiales (CNES), 31 - Toulouse (France); Duvivier, E.; Almarcha, B. [STEEL Technologies, Mazeres sur Salat (France); Dachev, T.P.; Semkova, J.V. [Bulgarian Academy of Sciences, Sofia (Bulgaria). Central Lab. of Solar Energy and New Energy Sources; Petrov, V.M.; Bengin, V.; Koslova, S.B. [Institute of Biomedical Problems, Moscow (Russian Federation) 1995-12-31 The measurement of the dose received by the cosmonauts, due to cosmic radiation, during a space mission is an important parameter for estimating the radiological risk. Tissue equivalent measurements of the radiation environment inside the MIR space station have been performed continuously since July 1992. Interesting results of the radiation measurements show (a) the South Atlantic Anomaly (SAA) crossings, (b) the increase of radiation near the poles, and (c) the effects of solar eruptions. These data are compared with solid state detector (SSD) and other tissue equivalent proportional counter (TEPC) results. (authors). 4 refs., 7 figs. 8. NASA uses Eclipse RCP Applications for Experiments on the International Space Station Science.gov (United States) Cohen, Tamar 2013-01-01 Eclipse is going to space for the first time in 2013! The International Space Station (ISS) is used as a site for experiments; any software developed as part of these experiments has to comply with extensive and strict user interface guidelines.
NASA Ames Research Center's Intelligent Robotics Group is doing 2 sets of experiments, both with astronauts using Eclipse RCP applications to remotely control robots. One experiment will control SPHERES with an Android smartphone on the ISS; the other experiment will control a K10 rover on Earth. 9. Robotic experiment with a force reflecting handcontroller onboard MIR space station Science.gov (United States) Delpech, M.; Matzakis, Y. 1994-01-01 During the French CASSIOPEE mission that will fly onboard MIR space station in 1996, ergonomic evaluations of a force reflecting handcontroller will be performed on a simulated robotic task. This handcontroller is a part of the COGNILAB payload that will also be used for experiments in neurophysiology. The purpose of the robotic experiment is the validation of a new control and design concept that would enhance the task performances for telemanipulating space robots. Besides the handcontroller and its control unit, the experimental system includes a simulator of the slave robot dynamics for both free and constrained motions, a flat display screen, and a seat with special fixtures for holding the astronaut. 10. A Systems Analysis of Emergency Escape and Recovery Systems for the U.S. Space Station. Science.gov (United States) 1986-12-01 serviced by expendable Soyuz transportation capsules (35:4). The space station will not have a continuous stand-by space shuttle (36:1). During full...a recovery capsule. Covering the recovery capsule is the forebody re-entry heat shield. Escaping crew members enter MOSES through an air lock...pyrotechnic devices separate the capsule from the forebody. 11. Amateur Radio on the International Space Station - the First Operational Payload on the ISS Science.gov (United States) Bauer, F. H.; McFadin, L.; Steiner, M.; Conley, C. L. 2002-01-01 12.
Assessment of Utilization of Food Variety on the International Space Station Science.gov (United States) Cooper, M. R.; Paradis, R.; Zwart, S. R.; Smith, S. M.; Kloeris, V. L.; Douglas, G. L. 2018-01-01 Long duration missions will require astronauts to subsist on a closed food system for at least three years. Resupply will not be an option, and the food supply will be older at the time of consumption and more static in variety than previous missions. The space food variety requirements that will both supply nutrition and support continued interest in adequate consumption for a mission of this duration is unknown. Limited food variety of past space programs (Gemini, Apollo, International Space Station) as well as in military operations resulted in monotony, food aversion, and weight loss despite relatively short mission durations of a few days up to several months. In this study, food consumption data from 10 crew members on 3-6-month International Space Station missions was assessed to determine what percentage of the existing food variety was used by crew members, if the food choices correlated to the amount of time in orbit, and whether commonalities in food selections existed across crew members. Complete mission diet logs were recorded on ISS flights from 2008 - 2014, a period in which space food menu variety was consistent, but the food system underwent an extensive reformulation to reduce sodium content. Food consumption data was correlated to the Food on Orbit by Week logs, archived Data Usage Charts, and a food list categorization table using TRIFACTA software and queries in a SQL SERVER 2012 database. 13. Optical ground station site diversity for Deep Space Optical Communications the Mars Telecom Orbiter optical link Science.gov (United States) Wilson, K.; Parvin, B.; Fugate, R.; Kervin, P.; Zingales, S. 
2003-01-01 Future NASA deep space missions will fly advanced high resolution imaging instruments that will require high bandwidth links to return the huge data volumes generated by these instruments. Optical communications is a key technology for returning these large data volumes from deep space probes. Yet cost-effectively realizing the high bandwidth potential of the optical link will require the deployment of ground receivers in diverse locations to provide high link availability. A recent analysis of GOES weather satellite data showed that a network of ground stations located in Hawaii and the Southwest continental US can provide an average of 90% availability for the deep space optical link. JPL and AFRL are exploring the use of large telescopes in Hawaii, California, and Albuquerque to support the Mars Telesat laser communications demonstration. Designed to demonstrate multi-Mbps communications from Mars, the mission will investigate key operational strategies of a future deep space optical communications network. 14. Bacteria, some permanent tenants of the Space Station; Bacteria, unos inquilinos permanentes de la estacion espacial Energy Technology Data Exchange (ETDEWEB) Diaz, B. 2015-07-01 Vacuum cleaners and rags with ethanol are the astronauts' cleaning products. Are enclosed spaces ever fully sterilized? It seems not, even on the International Space Station (ISS). When it comes to bacteria, they are able to travel more than 400 kilometers housed in the suits, bodies, and interiors of the astronauts themselves, and settle in an enclosed space where, unlike in a terrestrial cleanroom, the air is not recycled. A NASA study has found an abundance of 'opportunistic' bacteria which, although harmless on Earth, might cause infections, inflammations, or skin irritations. Not to mention the fungi that could damage or affect the station's equipment and infrastructure. (Author) 15.
Lessons Learned in Robotic Support of the Maintenance and Repair of the International Space Station Science.gov (United States) Dyer, J.; Lucier, L. With the completion of International Space Station (ISS) assembly, planning and daily activities are transitioning from construction to maintenance and repair. Previously, the ISS programme relied heavily on the Space Shuttle programme to aid these tasks due to its ability to deliver equipment on relatively short notice together with crewmembers recently trained on execution of specific tasks. In 2007, a study was performed to identify and develop preliminary removal and replacement (R&R) timelines for those Orbital Replaceable Units (ORUs) whose failure would cause a loss of redundancy to ISS power or life support capability [1]. This study identified 14 such ORUs, and the team was challenged to demonstrate that the various Extra Vehicular Activity (EVA) tasks required to repair or replace these units could be performed without the support of the Space Station Remote Manipulator System (SSRMS or Canadarm2), given the potential lack of redundancy in the robotic system caused by either the failed ORU or the safeguards put in place for its R&R. This philosophy was recently challenged upon failure of an external cooling loop pump, the repair of which relied heavily on Canadarm2 support. This paper will discuss the challenges associated with the planning and execution of R&R tasks onboard the ISS post Space Shuttle retirement, focusing on robotics support and using the 2010 failure of an external cooling loop pump module as context. A timeline of events from failure to recovery will be laid out, highlighting lessons learned. Specific challenges associated with this activity included developing products to complete three EVAs in two weeks, preparing crewmembers to perform a specific task for which they had not been trained, and the late decision to rely heavily on use of Canadarm2 for its completion.
Attention will also be given to the change in philosophy regarding use of the SSRMS as it applies to the continued maintenance and repair of the International Space Station. 16. Robotic assembly and maintenance of future space stations based on the ISS mission operations experience Science.gov (United States) Rembala, Richard; Ower, Cameron 2009-10-01 MDA has provided 25 years of real-time engineering support to Shuttle (Canadarm) and ISS (Canadarm2) robotic operations, beginning with the second shuttle flight, STS-2, in 1981. In this capacity, our engineering support teams have become familiar with the evolution of mission planning and flight support practices for robotic assembly and support operations at mission control. This paper presents observations on existing practices and ideas for reducing the operational overhead of present programs. It also identifies areas where robotic assembly and maintenance of future space stations and space-based facilities could be accomplished more effectively and efficiently. Specifically, our experience shows that past and current Space Shuttle and ISS assembly and maintenance operations have used the approach of extensive preflight mission planning and training to prepare the flight crews for the entire mission. This has been driven by the overall communication latency between the Earth and the remote location of the space station/vehicle as well as the lack of consistent robotic and interface standards. While the early Shuttle and ISS architectures included robotics, their eventual benefits to overall assembly and maintenance operations could have been greater had robotics been incorporated as a major design driver from the beginning of the system design. Lessons learned from the ISS highlight the potential benefits of real-time health monitoring systems, consistent standards for robotic interfaces and procedures, and automated script-driven ground control in future space station assembly and logistics architectures.
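The automated, script-driven ground control with real-time health monitoring mentioned in the entry above can be pictured with a toy sketch. This is purely illustrative: the command names, telemetry parameters, and limit values below are hypothetical and do not correspond to any real ISS or Canadarm2 interface.

```python
# Illustrative only: a toy script-driven control loop that gates each queued
# command on simple telemetry limit checks. All names and limits are invented.

def within_limits(telemetry, limits):
    """Return True if every monitored parameter lies inside its (lo, hi) band."""
    return all(lo <= telemetry[key] <= hi for key, (lo, hi) in limits.items())

def run_script(script, telemetry, limits):
    """Execute command steps in order, halting at the first limit violation."""
    executed = []
    for step in script:
        if not within_limits(telemetry, limits):
            return executed, "halted: telemetry out of limits"
        executed.append(step)  # in a real system this would dispatch the command
    return executed, "complete"

# Hypothetical limits and command script.
limits = {"joint_temp_C": (-40.0, 60.0), "bus_voltage_V": (110.0, 126.0)}
script = ["unstow_arm", "translate_to_worksite", "grapple_oru"]

steps, status = run_script(script, {"joint_temp_C": 21.5, "bus_voltage_V": 120.0}, limits)
print(status)  # -> complete
```

The design point is simply that a scripted sequence plus automatic limit checking removes the need for a human in the loop on every step, which is the kind of overhead reduction the entry argues for.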
In addition, advances in computer vision systems and remote operation, supervised autonomous command and control systems offer the potential to adjust the balance between assembly and maintenance tasks performed using extra vehicular activity (EVA), extra vehicular robotics (EVR) and EVR controlled from the ground, offloading the EVA astronaut and even the robotic 17. The return of "Gasoline station-park" status into green-open space in DKI Jakarta Province Science.gov (United States) Kautsar, L. H. R.; Waryono, T.; Sobirin 2017-07-01 The development of gasoline stations in 1970 increased drastically due to Government support through the DKI Jaya Official Note (DKI Jakarta), resulting in a great number of parks (green open space, or RTH - Ruang Terbuka Hijau) being converted into gasoline stations. Currently, to meet the RTH target (13.94% RTH, based on the RTRW (Rencana Tata Ruang Wilayah) DKI Jakarta 2010), the policy was changed by Decree No. 728 of 2009 and Governor Instruction No. 75 of 2009. The land function of 27 gasoline station units must be returned. This study aims to determine the appropriateness of converting gasoline station-parks into RTH based on a site and situation approach. The scope of this study was limited only to gasoline stations not yet converted into RTH. The methodology was a combination of the AHP (Analytical Hierarchy Process) and ranking methods. Site variables comprised proneness to flooding, the land area of the gasoline station, and land status. Situation variables comprised other public space, availability of other gasoline stations, gasoline station service, road segments, and the proportion of built space. The study used quantitative descriptive analysis. The results showed that three of the five gasoline stations were suitable for conversion into green open space (RTH). 18. Wetlab-2 - Quantitative PCR Tools for Spaceflight Studies of Gene Expression Aboard the International Space Station Science.gov (United States) Schonfeld, Julie E.
2015-01-01 Wetlab-2 is a research platform for conducting real-time quantitative gene expression analysis aboard the International Space Station. The system enables spaceflight genomic studies involving a wide variety of biospecimen types in the unique microgravity environment of space. Currently, gene expression analyses of space-flown biospecimens must be conducted post flight, after living cultures or frozen or chemically fixed samples are returned to Earth from the space station. Post-flight analysis is limited for several reasons. First, changes in gene expression can be transient, changing over a timescale of minutes. The delay between sampling in orbit and analysis on Earth can range from days to months, and RNA may degrade during this period of time, even in fixed or frozen samples. Second, living organisms that return to Earth may quickly re-adapt to terrestrial conditions. Third, forces exerted on samples during reentry and return to Earth may affect results. Lastly, follow-up experiments designed in response to post-flight results must wait for a new flight opportunity to be tested. 19. Exposure of Polymer Film Thermal Control Materials on the Materials International Space Station Experiment (MISSE) Science.gov (United States) Dever, Joyce; Miller, Sharon; Messer, Russell; Sechkar, Edward; Tollis, Greg 2002-01-01 Seventy-nine samples of polymer film thermal control (PFTC) materials have been provided by the National Aeronautics and Space Administration (NASA) Glenn Research Center (GRC) for exposure to the low Earth orbit environment on the exterior of the International Space Station (ISS) as part of the Materials International Space Station Experiment (MISSE). MISSE is a materials flight experiment sponsored by the Air Force Research Lab/Materials Lab and NASA. This paper will describe background, objectives, and configurations for the GRC PFTC samples for MISSE.
These samples include polyimides, fluorinated polyimides, and Teflon fluorinated ethylene propylene (FEP) with and without second-surface metallizing layers and/or surface coatings. Also included are polyphenylene benzobisoxazole (PBO) and a polyarylene ether benzimidazole (TOR-LM). On August 16, 2001, astronauts installed passive experiment carriers (PECs) on the exterior of the ISS, in which were located twenty-eight of the GRC PFTC samples for 1-year space exposure. MISSE PECs for 3-year exposure, which will contain fifty-one GRC PFTC samples, will be installed on the ISS at a later date. Once returned from the ISS, MISSE GRC PFTC samples will be examined for changes in optical and mechanical properties and atomic oxygen (AO) erosion. Additional sapphire witness samples located on the AO-exposed trays will be examined for deposition of contaminants. 20. Histological and Transcriptomic Analysis of Adult Japanese Medaka Sampled Onboard the International Space Station. Directory of Open Access Journals (Sweden) Yasuhiko Murata Full Text Available To understand how humans adapt to the space environment, many experiments can be conducted on astronauts as they work aboard the Space Shuttle or the International Space Station (ISS). We also need animal experiments that can apply to human models and help prevent or solve the health issues we face in space travel. The Japanese medaka (Oryzias latipes) is a suitable model fish for studying space adaptation, as evidenced by adults of the species having mated successfully in space during 15 days of flight on the second International Microgravity Laboratory mission in 1994. The eggs laid by the fish developed normally and hatched as juveniles in space. In 2012, another space experiment ("Medaka Osteoclast") was conducted. Six-week-old male and female Japanese medaka (Cab strain) osteoblast transgenic fish were maintained in the Aquatic Habitat system for two months in the ISS.
Fish of the same strain and age were used as the ground controls. Six fish were fixed with paraformaldehyde or kept in RNA stabilization reagent (n = 4) and dissected for tissue sampling after being returned to the ground, so that several principal investigators working on the project could share samples. Histology indicated no significant changes except in the ovary. However, the RNA-seq analysis of 5345 genes from six tissues revealed highly tissue-specific space responsiveness after a two-month stay in the ISS. Similar responsiveness was observed among the brain and eye, the ovary and testis, and the liver and intestine. Among these six tissues, the intestine showed the highest space response, with 10 genes categorized as oxidation-reduction processes (gene ontology term GO:0055114), and the expression levels of choriogenin precursor genes were suppressed in the ovary. Eleven genes, including klf9, klf13, odc1, hsp70 and hif3a, were upregulated in more than four of the tissues examined, thus suggesting common immunoregulatory and stress responses during space adaptation. 1. Space Station Freedom ECLSS: A step toward autonomous regenerative life support systems Science.gov (United States) Dewberry, Brandon S. 1990-01-01 The Environmental Control and Life Support System (ECLSS) is a Freedom Station distributed system with inherent applicability to extensive automation, primarily due to its comparatively long control system latencies. These allow longer contemplation times in which to form a more intelligent control strategy and to prevent and diagnose faults. The regenerative nature of the Space Station Freedom ECLSS will contribute closed-loop complexities never before encountered in life support systems. A study to determine ECLSS automation approaches has been completed.
The ECLSS baseline software and system processes could be augmented with more advanced fault management and regenerative control systems for a more autonomous evolutionary system, as well as serving as a firm foundation for future regenerative life support systems. Emerging advanced software technology and tools can be successfully applied to fault management, but a fully automated life support system will require research and development of regenerative control systems and models. The baseline Environmental Control and Life Support System utilizes ground tests in development of batch chemical and microbial control processes. Long duration regenerative life support systems will require more active chemical and microbial feedback control systems which, in turn, will require advancements in regenerative life support models and tools. These models can be verified using ground and on orbit life support test and operational data, and used in the engineering analysis of proposed intelligent instrumentation feedback and flexible process control technologies for future autonomous regenerative life support systems, including the evolutionary Space Station Freedom ECLSS. 2. The nutritional status of astronauts is altered after long-term space flight aboard the International Space Station Science.gov (United States) Smith, Scott M.; Zwart, Sara R.; Block, Gladys; Rice, Barbara L.; Davis-Street, Janis E. 2005-01-01 Defining optimal nutrient requirements is critical for ensuring crew health during long-duration space exploration missions. Data pertaining to such nutrient requirements are extremely limited. The primary goal of this study was to better understand nutritional changes that occur during long-duration space flight. We examined body composition, bone metabolism, hematology, general blood chemistry, and blood levels of selected vitamins and minerals in 11 astronauts before and after long-duration (128-195 d) space flight aboard the International Space Station. 
Dietary intake and limited biochemical measures were assessed during flight. Crew members consumed a mean of 80% of their recommended energy intake, and on landing day their body weight was less (P = 0.051) than before flight. Hematocrit, serum iron, ferritin saturation, and transferrin were decreased and serum ferritin was increased after flight (P < 0.05). Superoxide dismutase was less after flight (P < 0.05), indicating increased oxidative damage. Despite vitamin D supplement use during flight, serum 25-hydroxycholecalciferol was decreased after flight (P < 0.01). Bone resorption was increased after flight, as indicated by several markers. Bone formation, assessed by several markers, did not consistently rise 1 d after landing. These data provide evidence that bone loss, compromised vitamin D status, and oxidative damage are among the critical nutritional concerns for long-duration space travelers. 3. Space Station Operations Would Extend Until at Least 2024 Under Obama Plan Science.gov (United States) Showstack, Randy 2014-01-01 An 8 January decision by the White House to propose an extension of the International Space Station's (ISS) operation until at least 2024 would allow for increased research on board the floating laboratory, a longer planning horizon for commercial activities, and a continuation of international cooperation in space, administration officials said. The proposal, which has received initial support from some key members of Congress, would be the second extension for the ISS under the Obama administration and would accommodate increased research related to long-duration human space flight, Earth science, and other areas. The ISS, which in the United States is authorized under the NASA Authorization Act of 2010, was last extended in 2010 and costs about $3 billion annually to operate. 4.
Primary Dendrite Array Morphology: Observations from Ground-based and Space Station Processed Samples Science.gov (United States) Tewari, Surendra; Rajamure, Ravi; Grugel, Richard; Erdmann, Robert; Poirier, David 2012-01-01 Influence of natural convection on primary dendrite array morphology during directional solidification is being investigated under a collaborative European Space Agency-NASA joint research program, "Microstructure Formation in Castings of Technical Alloys under Diffusive and Magnetically Controlled Convective Conditions (MICAST)". Two Aluminum-7 wt pct Silicon alloy samples, MICAST6 and MICAST7, were directionally solidified in microgravity on the International Space Station. Terrestrially grown dendritic monocrystal cylindrical samples were remelted and directionally solidified at 18 K/cm (MICAST6) and 28 K/cm (MICAST7). Directional solidification involved a growth speed step increase (MICAST6-from 5 to 50 micron/s) and a speed decrease (MICAST7-from 20 to 10 micron/s). Distribution and morphology of primary dendrites are currently being characterized in these samples, and also in samples solidified on Earth under nominally similar thermal gradients and growth speeds. Primary dendrite spacing and trunk diameter measurements from this investigation will be presented. 5. How Do Lessons Learned on the International Space Station (ISS) Help Plan Life Support for Mars? Science.gov (United States) Jones, Harry W.; Hodgson, Edward W.; Gentry, Gregory J.; Kliss, Mark H. 2016-01-01 How can our experience in developing and operating the International Space Station (ISS) guide the design, development, and operation of life support for the journey to Mars?
The Mars deep space Environmental Control and Life Support System (ECLSS) must incorporate the knowledge and experience gained in developing ECLSS for low Earth orbit, but it must also meet the challenging new requirements of operation in deep space, where there is no possibility of emergency resupply or quick crew return. The understanding gained by developing ISS flight hardware and successfully supporting a crew in orbit for many years is uniquely instructive. Different requirements for Mars life support suggest that different decisions may be made in design, testing, and operations planning, but the lessons learned developing the ECLSS for ISS provide valuable guidance. 6. Mentoring SFRM: A New Approach to International Space Station Flight Control Training Science.gov (United States) Huning, Therese; Barshi, Immanuel; Schmidt, Lacey 2009-01-01 The Mission Operations Directorate (MOD) of the Johnson Space Center is responsible for providing continuous operations support for the International Space Station (ISS). Operations support requires flight controllers who are skilled in team performance as well as the technical operations of the ISS. Space Flight Resource Management (SFRM), a NASA-adapted variant of Crew Resource Management (CRM), is the competency model used in the MOD. ISS flight controller certification has evolved to include a balanced focus on development of SFRM and technical expertise. The latest challenge the MOD faces is how to certify an ISS flight controller (Operator) to a basic level of effectiveness in 1 year. SFRM training uses a two-pronged approach to expediting Operator certification: 1) embed SFRM skills training into all Operator technical training and 2) use senior flight controllers as mentors. This paper focuses on how the MOD uses senior flight controllers as mentors to train SFRM skills. 7.
Delay/Disruption Tolerant Networking for the International Space Station (ISS) Science.gov (United States) Schlesinger, Adam; Willman, Brett M.; Pitts, Lee; Davidson, Suzanne R.; Pohlchuck, William A. 2017-01-01 Disruption Tolerant Networking (DTN) is an emerging data networking technology designed to abstract the hardware communication layer from the spacecraft/payload computing resources. DTN is specifically designed to operate in environments where link delays and disruptions are common (e.g., space-based networks). The National Aeronautics and Space Administration (NASA) has demonstrated DTN on several missions, such as the Deep Impact Networking (DINET) experiment, the Earth Observing Mission 1 (EO-1) and the Lunar Laser Communication Demonstration (LLCD). To further the maturation of DTN, NASA is implementing DTN protocols on the International Space Station (ISS). This paper explains the architecture of the ISS DTN network, the operational support for the system, the results from integrated ground testing, and the future work for DTN expansion. 8. The International Space Station: A Unique Platform for Remote Sensing of Natural Disasters Science.gov (United States) Stefanov, William L.; Evans, Cynthia A. 2014-01-01 different times of the day and night. This is important for two reasons: 1) certain surface processes (e.g., development of coastal fog banks) occur at times other than local solar noon, making it difficult to collect relevant data from traditional satellite platforms, and 2) it provides opportunities for the ISS to collect data for short-duration events, such as natural disasters, that polar-orbiting satellites may miss due to their orbital dynamics - in essence, the ISS can be "in the right place at the right time" to collect data. An immediate application of ISS remote sensing data collection is that the data can be used to provide information for humanitarian aid after a natural disaster.
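The store-and-forward principle behind the DTN work in entry 7 above can be sketched with a toy bundle queue: a node takes custody of bundles while the link is disrupted and forwards them when a contact window opens. This is purely illustrative; it uses no real DTN implementation (e.g., ION or a BPv7 stack), and the bundle names are invented.

```python
# Illustrative only: a toy store-and-forward node showing the core DTN idea.
from collections import deque

class DtnNode:
    def __init__(self):
        self.stored = deque()   # bundles held in custody while no link exists
        self.delivered = []

    def receive(self, bundle):
        self.stored.append(bundle)          # store: the onward link may be down

    def contact(self):
        """A contact window opens: forward everything held in storage, in order."""
        while self.stored:
            self.delivered.append(self.stored.popleft())

node = DtnNode()
node.receive("telemetry-001")
node.receive("image-chunk-042")   # link disrupted: both bundles are held
node.contact()                    # link restored: bundles forwarded in order
print(node.delivered)             # -> ['telemetry-001', 'image-chunk-042']
```

The contrast with conventional IP networking is that nothing is dropped during the outage; custody transfer and later forwarding replace the assumption of a continuous end-to-end path.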
This activity contributes directly to the station's Benefits to Humanity mission. The International Charter, Space and Major Disasters (also known as the International Disaster Charter, or IDC) is an agreement between agencies of several countries to provide - on a best-effort basis - remotely sensed data related to natural disasters to requesting countries in support of disaster response. In the United States, the lead agency for interaction with the IDC is the United States Geological Survey (USGS); when an IDC request, or activation, is received, the USGS notifies the science teams for NASA instruments with targeting information for data collection. In the case of the ISS, Earth scientists in the JSC ARES Directorate, in association with the ISS Program Science Office, coordinate targeting and data collection with the USGS. If data is collected, it is passed back to the USGS for posting on its Hazards Data Distribution System and made available for download. The ISS was added to the USGS's list of NASA remote sensing assets that could respond to IDC activations in May 2012. Initially, the NASA ISS sensor systems available to respond to IDC activations included the ISS Agricultural Camera (ISSAC), an internal multispectral visible-near infrared wavelength system mounted in the WORF 9. On-Orbit Measurement of Next Generation Space Solar Cell Technology on the International Space Station Science.gov (United States) Wolford, David S.; Myers, Matthew G.; Prokop, Norman F.; Krasowski, Michael J.; Parker, David S.; Cassidy, Justin C.; Davies, William E.; Vorreiter, Janelle O.; Piszczor, Michael F.; McNatt, Jeremiah S. 2015-01-01 Measurement is essential for the evaluation of new photovoltaic (PV) technology for space solar cells. NASA Glenn Research Center (GRC) is in the process of measuring several solar cells in a supplemental experiment on NASA Goddard Space Flight Center's (GSFC) Robotic Refueling Mission's (RRM) Task Board 4 (TB4). 
Four industry and government partners have provided advanced PV devices for measurement and orbital environment testing. The experiment will be on-orbit for approximately 18 months. It is completely self-contained and will provide its own power and internal data storage. Several new cell technologies, including four-junction (4J) Inverted Metamorphic Multijunction (IMM) cells, will be evaluated and the results compared to ground-based measurements. 10. Primary Dendrite Arm Spacings in Al-7Si Alloy Directionally Solidified on the International Space Station Science.gov (United States) Angart, Samuel; Lauer, Mark; Poirier, David; Tewari, Surendra; Rajamure, Ravi; Grugel, Richard 2015-01-01 Samples from directionally solidified Al-7 wt.% Si have been analyzed for primary dendrite arm spacing (lambda) and radial macrosegregation. The alloy was directionally solidified (DS) aboard the ISS to determine the effect of mitigating convection on lambda and macrosegregation. Thermal histories of samples from terrestrial DS experiments are discussed for comparison. In some experiments, lambda was measured in microstructures that developed during the transition from one speed to another. To represent DS in the absence of convection, the Hunt-Lu model was used to represent diffusion-controlled growth under steady-state conditions. By sectioning cross-sections throughout the entire length of a solidified sample, lambda was measured and calculated using the model. During steady state, there was reasonable agreement between the measured and calculated lambdas in the space-grown samples. In terrestrial samples, the differences between measured and calculated lambdas indicated that the dendritic growth was influenced by convection. 11.
Commercial Seed Selection and Effectiveness of Sanitization Methods in Preparation for Plant Growth Experiments on the International Space Station Science.gov (United States) Boehm, Emma 2017-01-01 A closed-loop food production system will be important for gaining autonomy on long duration space missions. Crop growth experiments in the Veggie plant chamber aboard the International Space Station (ISS) are helping to identify methods and limitations of food production in space. Prior to flight, seeds are surface sterilized to reduce environmental and crew contamination risks. 12. 14 CFR 1266.102 - Cross-waiver of liability for agreements for activities related to the International Space Station. Science.gov (United States) 2010-01-01 .... (iii) The term “related entity” may also apply to a State, or an agency or institution of a State... for activities related to the International Space Station. 1266.102 Section 1266.102 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION CROSS-WAIVER OF LIABILITY § 1266.102 Cross-waiver of... 13. Amateur Radio on the International Space Station: The First Operational Payload on the ISS Science.gov (United States) Bauer, Frank H.; McFadin, Lou; Steiner, Mark D.; Conley, Carolynn L. 2002-01-01 As astronauts and cosmonauts have adapted to life on the International Space Station (ISS), they have found amateur radio and its connection to life on Earth to be an important on-board companion and a substantial psychological boost. Since its first use in November 2000, the first five expedition crews have utilized the amateur radio station in the Functional Cargo Block (also referred to as the FGB or Zarya module) to talk to thousands of students in schools, to their families on Earth, and to amateur radio operators around the world. This paper will discuss the development, qualification, installation and operation of the amateur radio system.
It will also discuss some of the challenges that the amateur radio international team of volunteers overcame to bring its first phase of equipment on ISS to fruition. 14. Effects of a Closed Space Environment on Gene Expression in Hair Follicles of Astronauts in the International Space Station. Directory of Open Access Journals (Sweden) Full Text Available Adaptation to the space environment can sometimes pose physiological problems to International Space Station (ISS) astronauts after their return to Earth. Therefore, it is important to develop healthcare technologies for astronauts. In this study, we examined the feasibility of using hair follicles, a readily obtained sample, to assess gene expression changes in response to spaceflight adaptation. In order to investigate the gene expression changes in human hair follicles during spaceflight, hair follicles of 10 astronauts were analyzed by microarray and real-time qPCR analyses. We found that spaceflight alters human hair follicle gene expression. The degree of change in gene expression was found to vary among individuals. In some astronauts, genes related to hair growth such as FGF18, ANGPTL7 and COMP were upregulated during flight, suggesting that spaceflight inhibits cell proliferation in hair follicles. 15. Amateur Radio on the International Space Station - Phase 2 Hardware System Science.gov (United States) Bauer, F.; McFadin, L.; Bruninga, B.; Watarikawa, H. 2003-01-01 The International Space Station (ISS) ham radio system has been on-orbit for over 3 years. Since its first use in November 2000, the first seven expedition crews and three Soyuz taxi crews have utilized the amateur radio station in the Functional Cargo Block (also referred to as the FGB or Zarya module) to talk to thousands of students in schools, to their families on Earth, and to amateur radio operators around the world.
Early on, the Amateur Radio on the International Space Station (ARISS) international team devised a multi-phased hardware development approach for the ISS ham radio station. Three internal development phases (initial Phase 1, mobile radio Phase 2 and permanently mounted Phase 3), plus an externally mounted system, were proposed and agreed to by the ARISS team. The Phase 1 system hardware, whose development started in 1996, has since been delivered to ISS. It is currently operational on 2 meters. The 70 cm system is expected to be installed and operated later this year. Since 2001, the ARISS international team has worked to bring the second generation ham system, called Phase 2, to flight qualification status. At this time, major portions of the Phase 2 hardware system have been delivered to ISS and will soon be installed and checked out. This paper intends to provide an overview of the Phase 1 system for background and then describe the capabilities of the Phase 2 radio system. It will also describe the current plans to finalize the Phase 1 and Phase 2 testing in Russia and outline the plans to bring the Phase 2 hardware system to full operation. 16. A Human Centred Interior Design of a Habitat Module for the International Space Station Science.gov (United States) Burattini, C. Since the very beginning of space exploration, the interiors of a space habitat have had to meet technological and functional requirements. Space habitats now have to meet completely different requirements related to comfort, or at least to liveable environments. In order to reduce the psychological drawbacks afflicting the crew during long periods of isolation in an extreme environment, one of the most important criteria is to assure high habitability levels. As a result of the TransHab project cancellation, the International Space Station (ISS) is currently made up of several research laboratories, but it has only one module for housing.
This is suitable for short-term missions; middle- to long-duration stays require new solutions in terms of public and private spaces, as well as personal compartments. A design concept for a module specifically fit for living during middle- to long-duration stays aims to provide the ISS with a place capable of satisfying habitability requirements. This paper reviews existing space habitats and crew needs in a confined and extreme environment. The paper then describes the design of a new and human-centred approach to habitation module typologies. 17. Astronaut's organ doses inferred from measurements in a human phantom outside the international space station. Science.gov (United States) Reitz, Guenther; Berger, Thomas; Bilski, Pawel; Facius, Rainer; Hajek, Michael; Petrov, Vladislav; Puchalska, Monika; Zhou, Dazhuang; Bossler, Johannes; Akatov, Yury; Shurshakov, Vyacheslav; Olko, Pawel; Ptaszkiewicz, Marta; Bergmann, Robert; Fugger, Manfred; Vana, Norbert; Beaujean, Rudolf; Burmeister, Soenke; Bartlett, David; Hager, Luke; Pálfalvi, József; Szabó, Julianna; O'Sullivan, Denis; Kitamura, Hisashi; Uchihori, Yukio; Yasuda, Nakahiro; Nagamatsu, Aiko; Tawara, Hiroko; Benton, Eric; Gaza, Ramona; McKeever, Stephen; Sawakuchi, Gabriel; Yukihara, Eduardo; Cucinotta, Francis; Semones, Edward; Zapp, Neal; Miller, Jack; Dettmann, Jan 2009-02-01 Space radiation hazards are recognized as a key concern for human space flight. For long-term interplanetary missions, they constitute a potentially limiting factor, since current protection limits for low-Earth orbit missions may be approached or even exceeded. In such a situation, an accurate risk assessment requires knowledge of equivalent doses in critical radiosensitive organs rather than only skin doses or ambient doses from area monitoring. To achieve this, the MATROSHKA experiment uses a human phantom torso equipped with dedicated detector systems.
We measured for the first time the doses from the diverse components of ionizing space radiation at the surface and at different locations inside the phantom positioned outside the International Space Station, thereby simulating an astronaut's extravehicular activity. The relationships between the skin and organ absorbed doses obtained in such an exposure show a steep gradient between the doses in the uppermost layer of the skin and the deep organs, with a ratio close to 20. This decrease, due to the body's self-shielding, and a concomitant increase of the radiation quality factor by 1.7 highlight the complexities of adequate dosimetry of space radiation. The depth-dose distributions established by MATROSHKA serve as benchmarks for space radiation models and radiation transport calculations that are needed for mission planning.

18. Space solar power stations: problems of energy generation and its use on the Earth's surface and in near space

Science.gov (United States)

Sinkevich, OA; Gerasimov, DN; Glazkov, VV

2017-11-01

Three important physical and technical problems for solar power stations (SPS) are considered: collection of solar energy and its effective conversion to electricity in space power stations, energy transportation by microwave beam to the Earth's surface, and direct utilization of the microwave beam energy for global environmental problems. The effectiveness of solar energy conversion into electricity in space power stations using gas and steam turbine plants and magnetohydrodynamic generators (MHDG) is analyzed. Closed-cycle MHDGs working on non-equilibrium magnetized plasmas of inert gases seeded with alkali metal vapors are considered. Special emphasis is placed on MHDG and gas-turbine installations that operate without a compressor. Opportunities for using the energy produced by space power stations for ecological needs on Earth and in space are also discussed.

19.
Terrestrial whisker growth experiments which anticipate some special effects of a space station environment

Science.gov (United States)

Hobbs, H. H.

1983-01-01

The effects of the absence of gravitationally driven thermal convection on the growth of whiskers by chemical reduction of metal salts were studied. It was possible to accomplish nearly complete suppression of such convection. Suppression of the convection does indeed affect the growth, but in subtle, not necessarily detrimental ways: none of the changes observed were such as to hamper efforts to produce whiskers in space. Copper whiskers grown from cuprous iodide respond most positively to the suppression of convection; therefore, they are strongly recommended for tests in the space environment. Cobalt whiskers grown from cobaltous bromide show the greatest independence from conditions of convection and applied electric fields of any material studied; therefore, this medium is also highly recommended. A strong pulse of electric field forces the whiskers to stick to the growth vessel's top plate, which facilitates study or "harvesting". On the space station, it is recommended that the growth vessels be mounted outside the laboratory and joined with the station by means of double vacuum valves and gas service lines.

20. Time Effects, Displacement, and Leadership Roles on a Lunar Space Station Analogue.

Science.gov (United States)

Wang, Ya; Wu, Ruilin

2015-09-01

A space mission's crewmembers are the most important group of people involved and, thus, their emotions and interpersonal interactions have gained significant attention. Because crewmembers are confined in an isolated environment, the aim of this study was to identify possible changes in the emotional states, group dynamics, displacement, and leadership of crewmembers during an 80-d isolation period. The experiment was conducted in an analogue space station referred to as Lunar Palace 1 at Beihang University.
In our experiment, all of the crewmembers completed a Profile of Mood States (POMS) questionnaire every week and two group climate scale questionnaires every 2 wk: specifically, a group environment scale and a work environment scale. There was no third-quarter phenomenon observed in Lunar Palace 1. However, fluctuations in the fatigue and autonomy subscales were observed. Significant displacement effects were observed when Group 3 was in the analogue. Leader support was positively correlated with the cohesion, expressiveness, and involvement of Group 3; leader control, however, was not. The results suggest that time effects, displacement, and leadership roles can influence mood states and cohesion in isolated crews. These findings from Lunar Palace 1 are in agreement with those obtained from Mir and the International Space Station (ISS).
http://link.springer.com/article/10.1007%2Fs11192-009-0047-5
Scientometrics, Volume 82, Issue 2, pp 391–400

# hg-index: a new index to characterize the scientific output of researchers based on the h- and g-indices

- S. Alonso
- F. J. Cabrerizo
- E. Herrera-Viedma
- F. Herrera

Article DOI: 10.1007/s11192-009-0047-5

Alonso, S., Cabrerizo, F.J., Herrera-Viedma, E. et al. Scientometrics (2010) 82: 391. doi:10.1007/s11192-009-0047-5

## Abstract

Measuring the scientific output of researchers is an increasingly important task in supporting research assessment decisions, and several different measures and indices can be found in the literature for doing so. Recently, the h-index, introduced by Hirsch in 2005, has received a great deal of attention from the scientific community for its good properties as a measure of the scientific production of researchers. Additionally, several other indicators, such as the g-index, have been developed to address possible drawbacks of the h-index. In this paper we present a new index, called the hg-index, to characterize the scientific output of researchers. It is based on both the h-index and the g-index, and aims to retain the advantages of both measures while minimizing their disadvantages.

### Keywords

h-Index; g-Index; Bibliometric indicators; Research evaluation
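As an illustrative sketch (not code from the paper), all three indices can be computed from a list of per-paper citation counts: h is the largest number such that h papers have at least h citations each, g is the largest number such that the g most-cited papers together have at least g² citations (capped here at the number of papers, unlike variants that pad with zero-cited papers), and the hg-index is defined in the paper as the geometric mean √(h·g). The sample citation counts below are made up for illustration.

```python
import math

def h_index(citations):
    # Largest h such that at least h papers have >= h citations each.
    cs = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cs, start=1) if c >= i)

def g_index(citations):
    # Largest g such that the g most-cited papers have >= g^2 citations in total
    # (capped at the number of papers in this variant).
    cs = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(cs, start=1):
        total += c
        if total >= i * i:
            g = i
    return g

def hg_index(citations):
    # hg = sqrt(h * g), the geometric mean of the two indices.
    return math.sqrt(h_index(citations) * g_index(citations))

cites = [10, 8, 5, 4, 3]          # hypothetical citation record
print(h_index(cites))             # 4: four papers have >= 4 citations
print(g_index(cites))             # 5: top 5 papers have 30 >= 25 citations
print(hg_index(cites))            # sqrt(4 * 5) ~= 4.47
```

Since the geometric mean sits between h and g, the hg-index softens the influence of a single highly cited paper on g while still rewarding it more than h alone does.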
https://indico.cern.ch/event/433345/contributions/2358342/
# Quark Matter 2017

Feb 5 – 11, 2017, Hyatt Regency Chicago (America/Chicago timezone)

## Photon emission at hadronization: possible resolution to the direct photon puzzle

Feb 7, 2017, 11:20 AM, 20m, Regency C

Oral, Electromagnetic Probes

### Speaker

Kazunori Itakura (KEK)

### Description

We discuss photon emission at the stage of hadronization as a possible resolution to the direct-photon puzzle. In an ordinary plasma, it is well known that photon emission occurs when the plasma returns to a normal state through recombination processes such as $e^- + p^+ \to H + \gamma$ for an electron-proton plasma; this is called "radiative recombination". A similar process should take place when a QGP hadronizes. For example, meson formation from a quark and an antiquark will be accompanied by photon emission, $q + \bar q \to \mathrm{meson} + \gamma$, to compensate for the energy difference between the initial and final states. In order to compute the number of photons emitted at hadronization, we employ the "recombination model" developed by the Duke group. There, the number of produced hadrons is computed under the assumption that coalescence of valence (anti)quarks simply occurs without emission of additional particles, which surely violates energy and entropy conservation. With photon emission added to this coalescence process, however, energy and entropy can be made to be conserved. We reinterpret the production formula of hadrons in the original recombination model as that of artificial "resonant states" whose invariant masses are not necessarily equal to the masses of any physical hadrons. We further assume that each "resonant state" decays into a physical hadron and a photon.
This "improved" recombination model has the potential to resolve the direct-photon puzzle: (1) it yields a larger number of photons, since it adds photon production at hadronization, which has been overlooked so far, and (2) the radiated photons flow similarly to hadrons, because photons are emitted in a collimated way along the resonant state's motion. Moreover, the $p_T$ distribution of emitted photons mimics a thermal distribution whose effective temperature is essentially given by the blue-shifted quark temperature and thus becomes much higher than the critical temperature.

Preferred Track: Electromagnetic Probes
https://rd.springer.com/article/10.1007/s10479-021-03951-2
# Clusters of high-dimensional interval data and related Boolean functions of events in Euclidean space

## Abstract

Clustering interval data has been studied for decades. High-dimensional interval data can be expressed in terms of hyperrectangles in $$\mathbb {R}^d$$ (or d-orthotopes) in the case of real-valued d-attribute data. This paper investigates such high-dimensional interval data: the Cartesian product of intervals, or a vector of intervals. For the efficient computation of related Boolean functions, some interesting aspects have been discovered using vertices and edges of the graph generated from given events. We also study the lower- and upper-bounded orthants in $$\mathbb {R}^d$$ as events, for which we show the existence of a polynomial-time algorithm to calculate the probability of the union of such events. This efficient algorithm was discovered by constructing a suitable partial order relation based on a recursive projection onto lower-dimensional spaces. Illustrative real-life applications are presented.
## Acknowledgements

It is an honor for the first author to have his academic father, Professor András Prékopa (1929–2016), as a second author of this paper.
This paper's main topic, the probability of Boolean functions of high-dimensional interval data, was studied in 2019–2020 solely by the first author, who presented the main idea of this paper at ISAIM (International Symposium on Artificial Intelligence and Mathematics) in January 2020 in Fort Lauderdale, Florida. Working on Boolean functions of hyperrectangles and the related binomial moment problem formulation was initially suggested by Professor Prékopa in May 2016. The first author dearly misses him.

## Author information

### Corresponding author

Correspondence to Jinwook Lee.

### Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. András Prékopa: Deceased 18 September 2016.

## Rights and permissions

Lee, J., Prékopa, A. Clusters of high-dimensional interval data and related Boolean functions of events in Euclidean space. Ann Oper Res (2021). https://doi.org/10.1007/s10479-021-03951-2

### Keywords

Clustering; Multivariate interval data; Orthant; Hyperrectangle; Graph; Spanning tree; Boolean functions; Euclidean space; Probability bounds
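For a flavor of the kind of computation the abstract refers to (this is not the paper's polynomial-time method, which exploits the special structure of orthants), the probability of a union of axis-aligned hyperrectangles under a product measure, here the uniform measure on the unit cube, can always be computed by naive inclusion-exclusion, since the intersection of axis-aligned boxes is again a box. The cost is exponential in the number of events; the box coordinates below are illustrative only.

```python
from itertools import combinations

def box_volume(box):
    # box: list of (lo, hi) intervals, one per dimension
    v = 1.0
    for lo, hi in box:
        if hi <= lo:
            return 0.0  # empty intersection
        v *= hi - lo
    return v

def intersect(b1, b2):
    # Intersection of two axis-aligned boxes is again an axis-aligned box.
    return [(max(l1, l2), min(h1, h2)) for (l1, h1), (l2, h2) in zip(b1, b2)]

def union_volume(boxes):
    # Inclusion-exclusion over all non-empty subsets: exponential in len(boxes).
    total = 0.0
    for k in range(1, len(boxes) + 1):
        sign = (-1) ** (k + 1)
        for subset in combinations(boxes, k):
            inter = subset[0]
            for b in subset[1:]:
                inter = intersect(inter, b)
            total += sign * box_volume(inter)
    return total

# Two overlapping squares inside the unit square [0,1]^2:
a = [(0.0, 0.5), (0.0, 0.5)]
b = [(0.25, 0.75), (0.25, 0.75)]
print(union_volume([a, b]))  # 0.25 + 0.25 - 0.0625 = 0.4375
```

The exponential blow-up of this baseline is precisely what makes polynomial-time algorithms for structured event families, such as the orthants studied in the paper, valuable.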
http://fsaad.scripts.mit.edu/randomseed/
Fun with two-sample testing

Statistical two-sample testing concerns the following question: given observations $X = \lbrace x_1, x_2, \dots, x_m \rbrace$ and $Y = \lbrace y_1, y_2, \dots, y_n \rbrace$ drawn from distributions $p$ and $q$, respectively, can we decide whether $p=q$? Two-sample testing is an active field of research with an enormous amount of literature. Asymptotic theory of two-sample tests is typically concerned with establishing consistency, meaning that the probability of declaring $p=q$, when in fact $p \ne q$, goes to zero in the large-sample limit.

Distinguishing distributions with a finite number of samples

A very nice (negative) result is that it is generally impossible to distinguish two distributions with high probability by only observing a finite pair of samples of fixed size. The result is also somewhat disappointing for those who are particularly interested in theoretical guarantees of statistical procedures in applied inference, where the ideal of asymptotia is a far reach from the chaotic array of csv files and other inhomogeneous data sources sitting on the analyst's hard drive. Anyway, consider the following scenario, which is from Gretton et al., "A Kernel Two-Sample Test," JMLR, 2012 (a highly recommended paper to read):

Assume we have a distribution $p$ from which we have drawn $m$ iid observations. Construct a distribution $q$ by drawing $m^2$ iid observations from $p$ and define a discrete distribution over these $m^2$ observations with probability $m^{-2}$ each. It is easy to check that if we now draw $m$ observations from $q$, there is at least a ${m^2 \choose m}\frac{m!}{m^{2m}} > 1 - e^{-1} > 0.63$ probability that we thereby obtain an $m$ sample from $p$. Hence, no test will be able to distinguish samples from $p$ and $q$ in this case. The probability of detection can be made arbitrarily small by increasing the size $m$ of the sample from which we construct $q$.
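Under the continuous-$p$ reading adopted later in this post (where "an $m$ sample from $p$" means all $m$ resampled points are distinct), the quantity ${m^2 \choose m}\frac{m!}{m^{2m}}$ is exactly the probability that $m$ uniform draws from the $m^2$ support points of $q$ are all distinct. A quick sketch, not from the cited paper, evaluates it exactly with rational arithmetic:

```python
from fractions import Fraction

def prob_all_distinct(m):
    # P(all m draws from m^2 equiprobable support points are distinct)
    # = m^2 (m^2 - 1) ... (m^2 - m + 1) / (m^2)^m  =  C(m^2, m) m! / m^(2m)
    p = Fraction(1)
    for k in range(m):
        p *= Fraction(m * m - k, m * m)
    return p

# m = 2 gives (4 * 3) / 4^2 = 3/4 exactly; larger m shrink slowly.
for m in (2, 5, 10, 30):
    print(m, float(prob_all_distinct(m)))
```

The point of the construction is that this probability stays bounded well away from zero as $m$ grows, so no test can reliably tell the two samples apart.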
To understand why this setup implies the impossibility of two-sample testing with high probability in the finite setting, we need to think about the frequentist properties of the procedure. Suppose we repeat the experiment a very large number of times, say 900,000 times. Marginalizing over all experiments, the samples drawn from $q$ will follow a different distribution than samples drawn from $p$. However, in roughly 2/3, or 600,000, of the experiments, the observation set drawn from $q$ is a random sample from $p$. Conditioned on being in one of the 600,000 trials where the $m$ samples are from the same distribution, an ideal two-sample test should report $p=q$, thereby committing a Type II error with arbitrarily large probability. Now, the authors left the derivation of the combinatorial bound ${m^2 \choose m}\frac{m!}{m^{2m}}$, for the probability that the $m$ draws from $q$ are an $m$ sample from $p$, as an exercise for the reader. It actually makes a nice homework or exam question for a 101 probability course (as do many other byproducts of academic papers, so I have learned). For those interested in elementary probability and combinatorics, here is a candidate derivation. First, note two ambiguities in the setup of the experiment: (i) is $p$ intended to be a continuous distribution, so that there are no duplicates in the $m^2$ samples? (ii) what exactly is the definition of the event "thereby obtain an $m$ sample from $p$"? To make progress, we assume that (i) $p$ is continuous and the $m^2$ samples in $q$ are unique, and (ii) an "$m$ sample from $p$" means that the resampled sequence contains no duplicates (duplicates occur with probability zero when sampling directly from $p$). With these assumptions, the bound becomes trivial. The size of the sample space is equal to the number of length-$m$ strings from $\lbrace x_1, \dots, x_{m^2} \rbrace$, which is $(m^2)^m$. The number of sequences of length $m$ with unique entries is ${m^2 \choose m}m!$.
Dividing the latter quantity by the former gives the bound. If the assumptions above are too strong and $p$ is discrete, then I am no longer sure what the definition of an "$m$ sample from $p$" is. Perhaps someone can let me know if alternative meanings are clear to them. Discreteness of $p$ would indeed provide a much more compelling example, since one can arrive at arbitrary results based on pathologies of real numbers being uncomputable.

A simpler counter-example with an infinite number of samples, and a paradox?

To the computer scientist, real numbers are uninteresting. Let us suppose $p$ is a discrete distribution with finite support, and additionally restrict its probability mass at each point in the support to be a rational number. One might simplify the above experiment by defining $q_m$ to:

• generate a size-$m$ sample from $p$, with probability $1-1/m$;
• generate a size-$m$ sample from an arbitrary distribution, with probability $1/m$.

Here is an interesting question: is there any test which can have non-zero power for testing $p$ versus $q_m$? Note that it is always the case that $p \ne q_m$. The probability of detection is $1/m$, which decays to zero as $m \to \infty$. Therefore, the Type II error necessarily goes to one, irrespective of the testing procedure. More confusing is the fact that $q_m$ converges to $p$ as the sample size increases. There seems to be a paradox underlying this experiment: how can we resolve it? (Hint: consider whether the frequentist/large-sample intuition of two-sample testing is coherent with our construction of the limiting process for testing $p$ against the sequence of distributions $(q_m)$.)

A topic "I want" to discuss

Having wants and expressing them to others is a very natural thing to do. Discussions with friends about plans, objectives, or ambitions typically involve use of the phrase "I want" and summaries of a position in terms of "wanting" X or Y.
In discussions that involve conflict between two parties, the conflicting wants are left unstated, and each side will instead argue in favor of the consequences of their wants. Given how central "wants" are to dictating our behavior (consider how a typical Economics 101 textbook will begin with basic explanations of "wants" and "needs" as the motivators for decision-making), it is worth thinking about how we can reason about our own wants, and how we can communicate them effectively to others.

The term "want" is vague. It conveys a primitive emotion rooted somewhere unknown in our cognitive processes. In cognitive science, the notion of "want" relates to a concept named motivational salience (from Wikipedia):

Motivational salience is a cognitive process and a form of attention that motivates, or propels, an individual's behavior towards or away from a particular object, perceived event, or outcome. Motivational salience regulates the intensity of behaviors that facilitate the attainment of a particular goal, the amount of time and energy that an individual is willing to expend to attain a particular goal, and the amount of risk that an individual is willing to accept while working to attain a particular goal.

In particular, "want" relates to incentive salience, which is an attractive form of motivational salience that causes approach behavior toward an objective:

Incentive salience is a cognitive process which confers a "desire" or "want" attribute, which includes a motivational component, to a rewarding stimulus. Reward is the attractive and motivational property of a stimulus that induces appetitive behavior (also known as approach behavior) and consummatory behavior.
The "wanting" of incentive salience differs from "liking" in the sense that liking is the pleasure that is immediately gained from the acquisition or consumption of a rewarding stimulus; the "wanting" of incentive salience serves a "motivational magnet" quality of a rewarding stimulus that makes it a desirable and attractive goal, transforming it from a mere sensory experience into something that commands attention, induces approach, and causes it to be sought out.

The above description of wants in terms of stimuli, goals, sensory experiences, risk, time, energy, attention, and rewards shows why summarizing an idea simply as "I want X" carries low information content. It leaves the main aspects that characterize the "want" almost entirely unaddressed. And while many of these aspects may be implicit (surely some wants are just obvious), the question then becomes: why are they obvious?

Proposition: when expressing an idea (usually some combination of a request, objective, and plan) which can be loosely phrased as "I want X", we can avoid the term "I want" and draw out an explanation as follows:

• the suggestion is X;
• the objective is Y;
• the investment effort is W;
• the reason is Z.

This form of thinking may appear rather mechanical, and perhaps even frustrating, for expressing a simple statement such as "I want X". Moreover, the terminal "reason Z" can itself be a "want", which then necessitates a recursive evaluation of the rule, up to some point of satisfaction or satiation. The exercise of investigating a "want", which may resemble a form of Socratic dialogue with oneself, will often bottom out at the qualities at the core of our human characteristics, most of which are poorly understood. On the positive side: benevolence, empathy, security, rationality, or harmony; on the negative side: fear, insecurity, greed, jealousy, irrationality, or selfishness; on the neutral side: progression, efficiency, aesthetic, ambition, or curiosity.
A methodological unpacking of the bases of our "wants" is therefore fundamental for arriving at carefully examined intentions and motivations. It is not uncommon to fixate on a given desire, yet struggle to express why the fixation exists in the first place. Does the "want" arise from a need, from external pressures, from a projection, from an illusion? How many resources are to be allocated in pursuit of a want, and which resources need to be traded off? What principles will need to be reinforced, and which will have to be relaxed? What is the fall-back for the event that the "want" turns out to be not as desirable as originally anticipated?

One might say that the propositions above are uninterestingly obvious, or a mere list of truisms. However, there is a strong distinction between knowing these ideas in the safety of the abstract, and deploying these techniques with full force and discipline in the danger of practice: personal disputes, workplace rivalries, corporate warfare, partisan politics, international conflict. We should be ready to accept that the examination of a "want" may lead to uncomfortable resolutions. At the same time, this discomfort is necessary. It is necessary to be maximally honest with ourselves; necessary to build emotional and intellectual stability; and necessary to foster cooperation with other agents who have conflicting and competing wants of their own.

The view from outside: reading international newspapers

It is natural for people to consume media products from the country they live in. For example, living in the United States, we often draw the bulk of our news about global affairs from sources such as CNN, Fox News, The Washington Post, The New York Times, and so on.
Google News, a popular and powerful news aggregator, by default will present top results for the "World" section from only a collection of the most popular news outlets of the nation of the detected IP address (as an exercise, visit Google News, click on "World" in the left panel, and explore for yourself). It is hardly surprising that US news outlets present world events (not just analysis, but also reporting) from a largely American perspective (a tendency which happens to have a name: Americentrism). But in the age of the internet, there is little excuse to exclusively consume world news from a restricted set of US-based media houses. It is unfortunate that reading, let alone being able to name, English newspapers from around the world is quite uncommon. The outcome is a blind spot to the perspectives and sentiments of the cultures and communities we read and make judgments about. Reading a news piece from an international newspaper often evokes one of two responses for me:

• An appreciation of the neutrality of the presentation of world events. As an example, US reporting on foreign nations regularly contains assessments about which regimes are democratic and friendly, versus autocratic and antagonistic. Furthermore, most US international news will tie the event to US foreign policy. Both these patterns are less pronounced at non-US news desks, especially those which are not global players and therefore have less stake in a particular foreign event.

• A surprise at how propagandist their depiction of world events can appear. (A common signal is observing how, and in reference to which groups, a news source will use the word "terrorist" versus "rebel".) We are so accustomed to reading about world events from a fixed perspective that foreign accounts often come across as strangely worded and manipulative. It takes effort to recognize that the issue goes both ways, and US media is likely subject to the same pitfalls when evaluated from outside.
Below is a collection of English-language international media sources which I enjoy reading during the early morning rounds, and which have helped me appreciate the variety of interpretations that are not apparent from reading one set of news sources alone: There are obviously thousands of news sources (and the above list misses the African and Latin American spheres, parts of the world I remain woefully uninformed about), but even a small sample from outside our bubble can broaden one-dimensional views of the greater world outside. Is the phrase “can you [do X]?” considered a polite way to request something in written communication on the internet, or perhaps even in American English more generally? It is a common phrase I encounter in semi-professional communication through e-mails, Slack, etc. It almost always appears without an associated “please”. The most confusing aspect is that, in its typical usage, “can you [do X]” is hardly a question with a reasonable yes/no answer (e.g. “can you send me this file?”); it is instead a straightforward request. I propose the following rule of thumb: ask yourself, is it easy to replace “can you” by “are you able to” without sounding overly sarcastic? If not, then a more accurate way to ask “can you [do X]?” is instead to say “please [do X]”. I hypothesize that “can you [do X]?” is indeed intended to sound like a less demanding way to request something, by situating the request as a question rather than a command. However, the phrase has a few shortcomings: • regardless of intent, it is still missing a “please” (which is important when many small requests add up and eat away at time!); • in most contexts it is not really a question. Why is America experiencing a frenzy of Russophobia? Note: While writing this post I came across Cathy Young’s op-ed at Forward.com: Red Scare: Trump and Democrats Alike Fan Paranoia Regarding Russia. I suggest reading Young’s thoughts on the issue for an alternative analysis. 
I will weigh in below with my own perspective on the Russia affair. There has been an upward trend of anti-Russian sentiment among the American public, growing at an unstoppable pace since news first broke last June of Russia hacking the DNC. Morning shows, newspaper front pages, hashtags on social networks: the propaganda machine is at full steam driving the renewed national phobia of the great threat and evil known as Russia. There are at least three largely overlapping but distinct interest groups which are central to the anti-Russia info-wars. Democrats are seething after being obliterated on all fronts in last November’s election, losing the House, the Senate, the presidency, and several state governorships. They are therefore engaging in an all-out campaign to undermine the legitimacy of the Trump administration. Democrats are also upset at being hacked, with emails revealing the level of corruption at the core of the DNC and their shameful treatment of Senator Bernie Sanders. Quite paradoxical is the bitterness that Democrat supporters reserve toward Russia, and the complete silence regarding the fraudulent actions of their party leaders at the DNC. This pattern of turning a blind eye to oneself and spewing virulence toward political opponents is consistent with the la-la land doctrine of “good guys” versus “bad guys” that dominates partisan America. Antagonizing Russia to undermine Trump is an unsurprising and understandable political strategy. It is also neither new nor likely to be particularly fruitful. The Tea Party and similar elements adopted an identical strategy when challenging Obama’s legitimacy via the birther movement circa 2010. Conspiracies about his Kenyan/Islamic/Martian origins, led in no small part by Trump himself, were endless. Birtherism became a national amusement. Republican media, spearheaded by Fox & Friends, thrived by fostering a culture of mean-spirited and destructive accusations toward Obama. 
I cannot name many other democracies where undermining the legitimacy of political opponents is common and accepted practice. The foreign policy establishment. (Aside: I believe that what meager information we read in the media and press about the inner workings of the current government is <1% accurate and representative of actual events, so the following is pure speculation.) The Trump administration has surprised Washington’s foreign policy establishment, which for decades has been led by neoconservative politicians and lobby groups, by significantly marginalizing the State Department. Neoconservatives, who count both prominent Democrats (e.g. Victoria Nuland, Obama’s great choreographer of the Ukrainian theatre) and Republicans (Sen. John McCain has been leading the fear-mongering recently) among their ranks, are playing second fiddle to Trump’s military generals at the Pentagon. My sense is that they are fabricating a foreign policy crisis with Russia in an attempt to revitalize their own political relevance. In the meantime, the Americans, Russians, and Turks are engaging in security coordination against ISIS and Syria’s civil war. While coverage of the deployment of 400 Marines into Manbij, Syria has been scant in US media, it is a significant military and diplomatic breakthrough signalling a change in US attitudes toward a solution in Syria. The national news media. People enjoy hating on Russia. It serves as an external enemy to vent against, and to confirm the exceptionalism of our democratic principles and moral character. Under the Orwellian slogan of “War is Peace”, this tactic is unsurprising. Particularly relevant is that, under the capitalist system, media houses are for-profit corporations acting under severe pressure of financial survival and political agendas. If stories about Russia continue to sell pageviews and drive up viewership, then they are guaranteed non-stop coverage. 
The longevity of the Russia connection is tied only to how long the media can keep up interest. The saga will be drawn out with a series of unexpected twists and turns. But interest will soon wane, and then our attention will move on to worrying about the next earth-ending crisis. I am not an expert on Russian culture, principles, or ways of life. As a non-citizen of Russia, with no connection to the country or the daily experiences of its regular people, I do not believe we are empowered to make value judgments about their political system and values. It is tempting to label other regimes as dictatorial and antagonistic, but I believe this judgement is only for people to make about themselves. A thought experiment with the Bayesian posterior predictive distribution Let $\pi(\theta)$ be a prior for parameter $\Theta$, and $p(x|\theta)$ a likelihood which generates an exchangeable sequence of random variables $(X_1,X_2,X_3\dots)$. Given a set of observations $D := \lbrace X_0=x_0, X_1=x_1, \dots, X_{N-1}=x_{N-1}\rbrace$, the posterior predictive distribution for the next random variable in the sequence $X_N$ is defined as $$p(X_{N}=s | D) = \int p(X_{N}=s | D,\theta) \pi(\theta|D)d\theta = \int p(X_{N}=s|\theta) \pi(\theta|D)d\theta,$$ where the second equality follows from assuming the data is exchangeable (or i.i.d. conditioned on latent parameter $\theta$). The posterior predictive density evaluated at ${X_{N}=s}$ is an expectation under the posterior distribution $\pi(\theta|D)$. 
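To make the expectation concrete, here is a minimal numerical sketch (my own addition, with arbitrary numbers not taken from the post) using the classic Beta-Bernoulli pair, where the posterior predictive has a closed form that a Monte Carlo average over the posterior should reproduce:

```python
import numpy as np

# Beta(a, b) prior on theta, Bernoulli(theta) likelihood.
# After observing k ones in n trials, the posterior is Beta(a + k, b + n - k),
# and the posterior predictive p(X_N = 1 | D) = E_posterior[theta] = (a + k)/(a + b + n).
a, b = 2.0, 2.0          # prior hyperparameters (arbitrary choice)
n, k = 10, 7             # observed data: 7 ones in 10 trials

closed_form = (a + k) / (a + b + n)

# The same quantity as a Monte Carlo expectation under the posterior.
rng = np.random.default_rng(1)
theta_draws = rng.beta(a + k, b + n - k, size=500_000)
monte_carlo = theta_draws.mean()

print(closed_form)   # 0.642857...
print(monte_carlo)   # close to the closed form
```

Here the probe function is simply $g(1,\theta)=\theta$, so the posterior predictive is the posterior mean; the Monte Carlo estimate agrees with the closed form up to sampling error.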
Define the function $g(s,\theta) := p(X_{N}=s|\theta)$ (note that $g$ does not depend on the index $N$ of the random variable since $\theta$ is known), and then compute the expectation of the random variable $g(s,\Theta)$ under $\pi(\theta|D)$, $$p(X_N=s | D) = \mathbb{E}_{\pi(\cdot|D)}\left[ g(s,\Theta) \right].$$ Now consider the case where each random variable $X_i$ is a two-dimensional vector $X_i = (X_{[i,1]}, X_{[i,2]}).$ The data $D = \lbrace X_0=x_0, X_1=x_1, \dots, X_{N-1}=x_{N-1}\rbrace$ is thus an exchangeable sequence of bivariate observations. (Assume for simplicity that marginalizing and conditioning the joint distribution $p(X_{[i,1]},X_{[i,2]}|\theta)$ are easy operations.) We again perform inference to obtain the posterior $\pi(\theta|D)$. Suppose we wish to evaluate the probability (density) of the event $\lbrace X_{[N,1]}=s \mid X_{[N,2]}=r \rbrace$ under the posterior predictive. I am in two minds about what this quantity could mean: Approach 1 Define the conditional probability density again as an expectation of a function of $\Theta$ under the posterior distribution. In particular, let the probe function $g(s,r,\theta) := p(X_{[N,1]}=s|X_{[N,2]}=r,\theta)$ (recalling that $g$ does not depend on $N$ when $\theta$ is known) and then compute the expectation of $g(s,r,\Theta)$ under $\pi(\theta|D)$, $$p_{\text{approach 1}}(X_{[N,1]}=s|X_{[N,2]}=r,D) = \mathbb{E}_{\pi(\cdot|D)}\left[ g(s,r,\Theta) \right].$$ Approach 2 Define the desired conditional probability density by application of Bayes' rule. 
Namely, separately compute two quantities:

joint density: $p(X_{[N,1]}=s,X_{[N,2]}=r|D) = \int p(X_{[N,1]}=s,X_{[N,2]}=r|\theta) \pi(\theta|D)d\theta$

marginal density: $p(X_{[N,2]}=r|D) = \int p(X_{[N,2]}=r|\theta) \pi(\theta|D)d\theta$

and then return their ratio, $$p_{\text{approach 2}}(X_{[N,1]}=s|X_{[N,2]}=r,D) = \frac{p(X_{[N,1]}=s,X_{[N,2]}=r|D)}{p(X_{[N,2]}=r|D)}.$$ Note that Approach 2 is equivalent to appending the condition $\lbrace X_{[N,2]}=r \rbrace$ to the observation set $D$ so that $D' := D \cup \lbrace X_{[N,2]}=r \rbrace$ and the new posterior distribution is $\pi(\theta|D')$. It then computes the expectation of $g(s,r,\Theta)$ under $\pi(\cdot|D')$, $$p_{\text{approach 2}}(X_{[N,1]}=s|X_{[N,2]}=r,D) = \mathbb{E}_{\pi(\cdot|D')}\left[ g(s,r,\Theta) \right].$$ Exercise: Show why the two expressions for $p_{\text{approach 2}}$ are equivalent (or let me know if I made a mistake!) Thoughts The question is thus, does the Bayesian reasoner update their beliefs about $\theta$ based on the condition ${X_{[N,2]}=r}$? I think both approaches can make sense: In Approach 1, we do not treat $\lbrace X_{[N,2]}=r \rbrace$ as a new element of the observation sequence $D$; instead we define the probe function $g(s,r,\theta)$ based on the conditional probability (which is a function of the population parameter), and then compute its expectation. Approach 2 follows more directly from the “laws of probability” but is less interpretable from the Bayesian paradigm. Why? Because if ${\Theta = \theta}$ is known, then $p(X_{[N,1]}=s|X_{[N,2]}=r,\theta)$ is just a real number; since the Bayesian does not know $\theta$, they marginalize over it. But it is unclear why the probe function $g(s,r,\theta)$ should influence the distribution $\pi(\theta|D)$, regardless of whether it happens to represent a density parameterized by $\theta$. 
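For what it is worth, a quick numerical sketch (my own addition: a bivariate Gaussian with known covariance and unknown mean, using the prior predictive with $D=\varnothing$, and entirely arbitrary numbers) suggests the two approaches really do give different answers:

```python
import numpy as np

def npdf(x, mean, var):
    """Univariate Gaussian density."""
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

rng = np.random.default_rng(0)

# Model (all numbers are arbitrary choices): X | theta ~ N(theta, Sigma) with
# Sigma known, and prior theta ~ N(mu0, Sigma0). Evaluate p(X1 = s | X2 = r).
Sigma  = np.array([[1.0, 0.6], [0.6, 1.0]])   # known observation covariance
mu0    = np.zeros(2)                          # prior mean
Sigma0 = np.eye(2)                            # prior covariance
s, r   = 0.5, 1.0

# Approach 1: E_theta[ p(X1 = s | X2 = r, theta) ], Monte Carlo over the prior.
thetas    = rng.multivariate_normal(mu0, Sigma0, size=200_000)
cond_mean = thetas[:, 0] + Sigma[0, 1] / Sigma[1, 1] * (r - thetas[:, 1])
cond_var  = Sigma[0, 0] - Sigma[0, 1] ** 2 / Sigma[1, 1]
p1 = npdf(s, cond_mean, cond_var).mean()

# Approach 2: ratio of the joint predictive to the marginal predictive.
# The prior predictive of X is N(mu0, Sigma + Sigma0) (sum of independent Gaussians).
S = Sigma + Sigma0
d = np.array([s, r]) - mu0
joint    = np.exp(-0.5 * d @ np.linalg.solve(S, d)) / (2 * np.pi * np.sqrt(np.linalg.det(S)))
marginal = npdf(r, mu0[1], S[1, 1])
p2 = joint / marginal

print(p1, p2)  # the two approaches disagree
```

In this example Approach 1 gives roughly 0.281 while Approach 2 gives roughly 0.292: conditioning on $\lbrace X_{[N,2]}=r \rbrace$ before averaging over $\theta$ is not the same as averaging the conditional density over the unconditioned prior.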
Next Steps Perhaps I should numerically/analytically compute the difference between Approach 1 and Approach 2 for a bivariate Gaussian with known covariance and unknown mean. For simplicity, just use the prior predictive, letting $D=\varnothing$. Trump knew it, Hillary blew it — the failures of polling Election season has come and gone. Donald Trump pulled off what has been repeatedly characterized as a “stunning upset” over bitter rival Hillary Clinton. The web has gone rampant with postmortem analysis about the failures of election polling. But was it really that stunning? Several polls conducted in the lead-up to the election reported on the virtually deadlocked race, all well within any reasonable margin of error: Quinnipiac University reported on the situation in key battleground states (Nov 2) Democrat Hillary Clinton’s October momentum comes to a halt as she clings to a small lead in Pennsylvania, while Republican Donald Trump moves ahead in Ohio, leaving Florida and North Carolina too close to call. Probability forecast models, on the other hand, were remarkably off the mark and predicted Clinton well ahead just the night before the election: • New York Times Upshot: Clinton 84%, Trump 16% • FiveThirtyEight: Clinton 66.9%, Trump 33% • PredictWise: Clinton 89%, Trump 11% I was watching the blitz of last-ditch rallies held by Trump and Clinton the night before election day, to learn about the sentiments they were expressing about their chances. Here is a revealing segment from Trump’s penultimate war cry in New Hampshire (8pm on Nov 7): We are going right after this to Michigan, because Michigan is in play… The polls just came out: we are leading in Michigan; we are leading in New Hampshire; we are leading in Ohio; we are leading in Iowa; leading in North Carolina; I think we are doing really, really well in Pennsylvania; and I do believe we are leading in Florida. In the meantime, according to the New York Times: Mrs. 
Clinton’s campaign was so confident in her victory that her aides popped open Champagne on the campaign plane early Tuesday. Either way, each candidate and their popular base was clearly happy to live in their own reality right up to the wire. Some personal take-aways from this whole affair: • Polling and forecasting is a messy, complex, empirical problem which next-to-nobody understands. I doubt it is statistical. • Well-informed voters do not derive the bulk of their information from cursory reading of social or national news media. They are vigilant about critiquing every aspect of information they consume. Otherwise, they are very, very sad. There used to be a time when journalism was a profession. In the information age, however, the careful production and consumption of public media has become a long-gone art. Data streams chaotically and continuously from all directions through the social networks (Facebook, Twitter, Slack, SMS), and it is a mystery how anyone can distill a meaningful signal from the noise. One might rather call it the misinformation age. One could also write a volume critiquing the sloppiness in today’s written content. For this post, let us focus right at the start: headlines. The purpose of a headline is to serve as a useful, succinct summary of the content of an article. Many headlines on the web are instead extremely predictable, repetitive, and often appear to be pulled right out of a book called “Click Bait 101”. I recently browsed through the front page of Google News, and selected an assortment of representative headlines that echo some of the most recurring motifs. Everything you need to know about … These articles are a hold-my-hand guide through a dangerously oversimplified presentation of some complex issue. They usually receive a very high number of comments and page views. 
I attribute most of the blame for the success of this headline to the intellectual laziness of readers, who wish to quickly become knowledgeable and form opinions about a topic that society agrees is important. It is also troubling that the writer is confident that they are telling you “everything you need to know about…” It would be more honest and accurate to rephrase as “some things we want you to know about…” Here are some examples. • Everything You Need to Know About Britain’s New Prime Minister. ABC • Everything You Need to Know About K2, the Drug Linked to Mass Overdose. NBC • Everything You Need to Know About Today’s GOP Rules Committee Meeting. ABC • Everything you need to know about the net neutrality debate in India. India Times • Everything you need to know about Theresa May’s Brexit nightmare in five minutes. Politics UK The last one is the exemplar: promising comprehensive coverage of Brexit in five minutes of your valuable time. [someone] just did [something shocking]! Intended for maximum shock and urgency. The [something shocking] event is typically a straight-out deception about something trivial. The primary purpose of the headline is rather to make a statement about, or build a persona for, the [someone]. Examples: • Bernie Sanders Just Made Jill Stein The Most Powerful Woman In American Politics. Huffington Post • The FDA Just Declared War on Cookie Dough. Smithsonian • Putin Just Created His Own Personal Army. Daily Caller • Vladimir Putin just invited Kim Jong Un to visit Russia. Really. Washington Post Even though a given reader may not read much about Putin, glancing through enough headlines like these works to build the negative, dictatorial character that is widespread in the West today. And who has the remotest idea about who Jill Stein is? Maybe she is too busy working in the Earth’s core and shifting tectonic plates… [Yes, No, Sorry], … These headlines sound like bitter responses in an argument in a YouTube comment thread. 
For each of these examples, consider how much more readable and professional the title would be without the silly bolded words. • No, Bernie Sanders Did Not Sell Out by Endorsing Hillary Clinton — Just the Opposite. Forward • Sorry, You’re Just Going To Have to Save More Money. Wall St. Journal • Sorry 538, La Taqueria is delicious — but it’s also fatally flawed. Vox • Yes, Clinton is sinking in the polls. No, you should not panic. Here’s why. Washington Post Some similar prefixes to look out for in the future are “Right,” and “Wait,”. It is pretty surprising they have not been picked up already. Conclusion The next time you come across a headline along these lines, think about why the writer chose that particular pattern, what emotional or intellectual reaction they are trying to evoke, and how it contributes to their propaganda. Although quite simple, my name is not easy to pronounce in English (even for me). Older people sometimes say “ferris” like a ferris wheel (perhaps in reference to the famous 80s movie Ferris Bueller’s Day Off). I am not particularly bothered by the different permutations either way, but it can be inconvenient at times. When visiting a coffee shop, or engaging in any other unimportant event where a name is required for reference, I typically give the simplest two-letter name: “Jo”. It is impossible to cause any confusion or unnecessary back-and-forth exchange. The strategy usually works fine (although on rare occasion I stare blank-faced at the poor barista, who is yelling repeatedly for “Jo” and wondering why I am ignoring them). More recently I have decided to have a bit more fun with using names. The last few times I was grabbing a coffee with a friend, I would use their name at the counter instead. What I found most surprising is the unexpected response it stirred, a mix of confusion, defensiveness, and a feeling of being insulted: Friend: “That is my name! What is wrong with you?” Me: Relax. 
(Always fun to tell someone getting worked up to relax.) Friend: “No! You cannot just use my name! A name is a highly personal matter.” I wonder what set of experiments one could design to formally study the question: to what extent do people feel their names are related to their “identity”? Why does casually using someone else’s name (not even posing as that person, which is creepy, but using a first name which they happen to have) suddenly make them uncomfortable? I think Shakespeare said it right. Try it with your friends, and see what reactions it evokes.
http://tex.stackexchange.com/questions/64215/make-part-in-scrartcl
# Make \part in scrartcl

I am using the scrartcl class of the well-known KOMA-Script bundle (to those that know it well). As you know, the article class has an environment called abstract. Great, I like this and I use it. However, here is what I am actually making (and I have asked many questions here about the same; I am sorry I do not really answer questions, because I am way more incompetent than the people that can actually answer them. Perhaps soon, after I learn more!). What I have is a document called Puzzles.tex that is basically a template for each puzzle, each of which is in a <description of puzzle>.tex file. Great! However, I also have different categories of these puzzles. What I would like is a page in between that states, say: APPLES!! for problems related to apples. That could be named I. APPLES!! on the page, and for the rest an empty pagestyle (or whatever). The next files are included, and the title is a section and the parts are subsections. Summary: So, basically the problem is: How do I make a \part for the scrartcl class? I would also like to have it in the TOC. Another option would be to implement an abstract in the book class (and then do section -> chapter and subsection -> section), but using the book class seems weird as I probably have fewer than 30 pages. Any option that would give me what I need would be interesting, even the ones I did not mention. My ignorance can contribute to the fact that I miss better solutions.

The command \part is already implemented in the scrartcl document class, so you can just redefine how this command behaves. You can redefine \partheadstartvskip, \partheadmidvskip, and \partheadendvskip so that \part titles for scrartcl are typeset on their own page using, for example, the empty page style. 
Redefining \raggedpart (used in scrartcl.cls to have ragged-right titles for parts) to be \centering, you will get centered titles:

    \documentclass{scrartcl}
    \renewcommand\raggedpart{\centering}
    \begin{document}
    \tableofcontents
    \part{Test Part}
    \section{Test Section}
    \end{document}

An image of the first two pages: Since the page style for the modified \part pages was declared to be empty, it doesn't make sense to have page numbers for the ToC entries associated to \part; to suppress the page number in the ToC, you can add the following lines to the preamble:

    \usepackage{etoolbox}
    \makeatletter
    \patchcmd{\l@part}{\hss#2}{}{}{}
    \makeatother

If you want to suppress the word "Part" from the title, simply add to the preamble the line

    \renewcommand*\partformat{\thepart\autodot}

Here's an example illustrating the suggested modifications:

    \documentclass{scrartcl}
    \usepackage{etoolbox}
    \makeatletter
    \patchcmd{\l@part}{\hss#2}{}{}{}
    \makeatother
    \renewcommand\raggedpart{\centering}
    \renewcommand*\partformat{\thepart\autodot}
    \begin{document}
    \tableofcontents
    \part{Apples}
    \section{Test Apple Section}
    \end{document}

And the image of the first two pages:
http://www.analyzemath.com/trigonometry_questions/comple_suppl.html
Questions on Complementary and Supplementary Angles

Multiple choice questions on supplementary and complementary angles, with answers at the bottom of the page.

Question: Which two angles are supplementary?
a) 30° and 60° b) 41° and 139° c) 45° and 145° d) 23° and 147°

Question: What is the complementary angle to angle B = π/3?
a) π/2 b) π/3 c) π/4 d) π/6

Question: Which two angles are complementary?
a) 30° and 130° b) 20° and 160° c) 45° and 145° d) 1° and 89°

Question: Which of the following angles is supplementary to angle C = 2π/3?
a) 4π/3 b) 2π/3 c) π/3 d) 3π/4

Question: Which pairs of angles are complementary?
a) 3π/4 and π/4 b) 5π/12 and π/12 c) π/4 and π/3 d) π/16 and π/8

Question: Which two angles are supplementary?
a) π and π/2 b) π/3 and 3π/2 c) π/7 and 6π/7 d) π/8 and π/2

ANSWERS
b) d) d) c) b) c)
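The answer key above can be checked mechanically: complementary pairs sum to π/2 and supplementary pairs to π. The small script below (my own addition, not part of the quiz page) verifies each listed answer:

```python
import math

deg = math.radians

# Each entry: (sum of the angle pair from the answer key, expected target).
# Complementary pairs must sum to pi/2; supplementary pairs to pi.
checks = [
    (deg(41) + deg(139), math.pi),                   # Q1: b) supplementary
    (math.pi / 3 + math.pi / 6, math.pi / 2),        # Q2: d) complementary
    (deg(1) + deg(89), math.pi / 2),                 # Q3: d) complementary
    (2 * math.pi / 3 + math.pi / 3, math.pi),        # Q4: c) supplementary
    (5 * math.pi / 12 + math.pi / 12, math.pi / 2),  # Q5: b) complementary
    (math.pi / 7 + 6 * math.pi / 7, math.pi),        # Q6: c) supplementary
]

for total, target in checks:
    assert math.isclose(total, target)
print("all answers check out")
```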
https://ageconsearch.umn.edu/record/54710
### Abstract

Using the 2002/03 and 2005/06 nationally representative household surveys, the poverty headcount index declined from 38.8%. The corresponding poverty gap ratio declined from 11.9% to 8.7%. While all Ugandans enjoyed an increase in consumption between the sample periods, the rate of growth in consumption was slightly higher for the lower percentiles. This led to a significant improvement in the distribution of income, as demonstrated by the decline in the Gini coefficient from 0.428 to 0.408. The urban areas continue to have higher rates of inequality; nonetheless, they witnessed a significant improvement: the Gini coefficient declined from 0.483 in 2002/03 to 0.432 in 2005/06. Overall, the improvement in the distribution of income had a positive impact on poverty reduction. The poverty headcount in 2005/06 would have been higher by 1.2 percentage points if the distribution of income had remained constant at the 2002/03 level. Using static decomposition techniques to examine the pattern of inequality in real consumption, inequality between regions and between levels of educational attainment of the household head increased over the sample periods, but inequality between rural/urban subgroups declined.
https://www.wyzant.com/resources/answers/topics/math-problem
1,357 Answered Questions for the topic Math Problem

12/10/19 #### Trigonometry equation
Find tan(x-y) if sin(x) = 1/5, cos(y) = 2/5, tan(x) < 0, tan(y) > 0

10/09/19 #### Precalculus word problem
A rectangular piece of cardboard measuring 12 inches by 18 inches is to be made into a box with an open top by cutting equal-sized squares from each corner and folding up the sides. Let x represent... more

09/05/19 #### algebra 1 honors
Peter hit twice the difference of the number of home runs Alice hit and 6. Altogether, they hit 18 home runs. How many home runs did each player hit that season?

08/16/19 #### Sasha is building a picture frame. The picture is 8 inches by 10 inches. How many inches of wood will Sasha need?
Sasha is building a picture frame. The picture is 8 inches by 10 inches. How many inches of wood will Sasha need?

07/31/19 #### ELEMENTARY MATH
I am thinking of a 3-digit number. It can be formed by using three of the digits shown on the cards below. The digit in the tens place is 1 more than the digit in the ones place. The digit in the tens... more

06/11/19 #### You are going to plant a rectangular garden. There is a restriction that the length must be 1 foot less than twice the width. Choose a length and width based on the relationship given

05/01/19 #### Which of the following is the maximum weight of the items purchased?
A woman buys 8 items at the grocery store. The lightest item weighs 6 ounces; the heaviest weighs 14 ounces. The woman buys 4 items that weigh 10 ounces per item. Which of the following is the... more

03/31/19
Neil can put together 1/5 of his puzzle pieces in one hour. What fraction of the puzzle will be solved after three hours?

03/26/19 #### Trigonometry problem
Suppose you are solving a trigonometric equation for solutions over the interval [0,2π), and your work leads to 2x = 2π/3, 2π, 8π/3. What are the corresponding values of x?

03/03/19
What is 105 septillionths of 140 millimeters, written as all numbers: no words, just numbers. 
01/17/19 #### Credit Card Balance
Suppose you have a credit card balance of $16,500. The minimum payment is $338, and the annual percentage rate is 20.9%. If you make the minimum payment of $338, how long will it take you to pay... more
https://math.meta.stackexchange.com/questions/33593/downvoting-newbie-users-questions?noredirect=1
# Downvoting newbie users' questions

I found this post today. Yes, I agree that the question has been asked many times before, but aren't 9 downvotes for a new user a bit too intense? I think mass-downvoting newbies will only harm the site in the long term (in terms of retaining new long-term users). What I would suggest people do is leave helpful comments, and if the OP isn't willing to fix their question, vote to close rather than downvote when it's a new user. Why? First, the meaning of a downvote isn't something a new user can understand without comments pointing out what's wrong; second, people are more likely to become long-term contributors if they have a good first experience where they feel welcomed. Maybe there are reasons I don't know of why voting like this is a good idea; if so, kindly explain why in an answer.

• Why do you connect the downvotes to CURED? I myself never add a downvote after -4. I agree that piling on downvotes serves no purpose, but don't attribute that phenomenon to CURED users. Please be careful about pointing fingers at a rather small group of people. I think your question is otherwise worth considering, had you started with "I encountered this question.... blah, blah" May 17 '21 at 19:02
• Hmm fair, I did it because I found the question from there. However, pointing fingers will not help, so I will remove that point. May 17 '21 at 19:04
• I voted to close the question, for one, because it has been duplicated dozens of times, in various forms. But I'm not out to humiliate askers. May 17 '21 at 19:04
• I didn't mean this to be a question; I meant this to be a discussion (as shown with the tags) on something which happened, because we should consider this event in how we proceed as a community. May 17 '21 at 19:06
• @amWhy: There is a point, although I agree that it is a bit lost in the case of new users (not necessarily new accounts).
The more downvotes are added, the faster a user account will go into rate limits and be blocked automatically. But yes, with a new user this isn't usually the intended outcome. – Asaf Karagila Mod May 17 '21 at 20:16

• The downvotes here could also be because this is an elementary question. I have pointed out many times before that people often downvote because they think the OP misses a step that they definitely (in the eyes of the downvoter) ought to have seen earlier. This happens quite often and can skew elementary questions. The only way to prevent this occurring is to judge a question as a user who is only looking at the question's worth w.r.t. site rules; on the FLIP side, we have questions that get upvoted only because users feel challenged, although they are PSQs. These come under the same bracket. May 17 '21 at 22:03
• Maybe it would have been better if the question had been closed as a duplicate. No one has even suggested, in the comments on it, a duplicate target. May 18 '21 at 10:44
• Well, I was a new user at one point and I posted PSQs and I was digitally murdered for it. It was warranted. In a way I do support the tough-love approach. I don't like the idea that the down-votee is totally clueless about the situation. May 19 '21 at 14:30
• @GEdgar Even though downvotes in such cases don't decrease reputation, I don't think they have no effect besides upsetting the user. For example, questions whose score is below -3 get hidden from the front page. Asaf mentioned above that it makes it more likely for the downvoted user to trigger rate limits and get auto-blocked. Also, if a question has a negative score and no answers, it will be automatically deleted after 30 days. Though we can still discuss whether these effects are desirable when it comes to new users. May 20 '21 at 17:12
• @user Downvotes are used to indicate that the question is not useful. What do you think: is every question useful? Are there better ways of saying a question is not useful than downvoting?
Should a person not lose reputation for posting non-useful questions? Please be more precise in what you want to converse about, unless you don't wish to, in which case, each to their own. I want to converse since I disagree with you. May 24 '21 at 9:12

• @TeresaLisbon I would never downvote a question just because it is a duplicate. We cannot require a user (especially a new one) to spend hours searching the archive of MSE. Moreover, it will appear that in some sense almost any question is a duplicate. – user May 24 '21 at 9:59
• I'll often downvote and vote to close low-effort questions simply because there are too many users that encourage low-effort question behavior by answering them for quick points. The downvote signals that something is wrong and hopefully discourages the behavior. I don't like to do it, though. May 24 '21 at 13:42
• @TeresaLisbon I don't love downvoting newish users who ask these low-effort questions because of the human impact, but at the end of the day I prioritize the health of MSE over the feelings of a user not acting in good faith. May 26 '21 at 13:42
• @amWhy To be honest, I think you're a little harsh in terms of down-votes, closing questions, deleting posts, etc. May 29 '21 at 21:45

The short answer is that upvotes and downvotes are not primarily intended to be instructive for the OP; they're intended to be a signal to other readers as to whether the content is worth reading or not. If you would like to educate a new user as to how to use the site, you are, of course, free to leave a comment explaining how to improve their post. To quote from the linked Q&A:

> Downvotes are, first and foremost, a content rating system. Rather than being a way of communicating with the poster, they are a way of communicating to future readers that a question or answer is not interesting or useful. If someone wants to leave a comment to communicate with the poster, they can always do so, independent of the voting system.
Also, one of the basic principles of site moderation is to vote on content, not users. Avoiding downvoting or voting to close a new user's question is voting on users, not content.

• The principle that downvotes are about content, not users, is a good one. However, new users are likely to be unfamiliar with this, and see downvotes on their posts as a personal insult. Moreover, once the downvotes have reached a certain threshold, it is already clear to other readers that the post is low quality. What good does it do for a new user's post to be sitting at $9$ downvotes? – Joe May 31 '21 at 19:52
• @Joe It's a little unclear to me how we prevent new users from taking downvotes personally. By that logic, we could never downvote any post from any new user, no matter how bad. I do agree that pile-on downvotes seem a little excessive (since a score of, for example, -5 would signal that the content isn't useful just as well as -9 would), but I definitely don't agree that we should treat new users' posts any differently than we treat anyone else's (because that would be moderating users rather than content). May 31 '21 at 19:57
• @Joe A case could maybe be made for discouraging downvoting a question beyond, say, -5 votes (to discourage pile-on downvotes), but that standard would obviously have to be applied uniformly (not just to new users). May 31 '21 at 20:01
• I certainly have downvoted new users' posts. But that's when I think that the good that it does for the site in terms of filtering out bad content outweighs the discouragement that it might bring to that user. Also, I think it is sensible to treat new users differently. For instance, I might be forgiving of a new user for not using MathJax in their post, but I certainly wouldn't give as much leeway to an established user. I don't agree that voting should be based solely on the content of the post, even if that is an important aspect of it. But we can agree to disagree.
– Joe May 31 '21 at 20:04

• @Joe Personally, I generally don't downvote for formatting problems (unless it's so severe that it can't be fixed by editing and it renders the post unreadable, in which case it makes the question unclear). In general, if a problem can be fixed by another user editing the post, it's better to try to salvage the post than to downvote or vote to close. If the post's problems can't be fixed by anyone other than the OP editing, then downvoting and/or voting to close could be appropriate. Jun 2 '21 at 13:56
• I get the sense that some people downvote because they think the question is "stupid" in the sense that the answer is obvious (to them) or that it is trivially answered by Googling or some other means. As a questioner in a new subject area, though, this is problematic when I have tried and can't find the answer. More generally, there's the problem that downvotes are a noisy indicator of quality, but it only takes one or two before nobody will even look at your question anymore. Jul 9 '21 at 17:59
http://mathhelpforum.com/calculus/115329-when-two-waves-identical.html
# Thread: When are two waves identical?

1. ## When are two waves identical?

Imagine a simple sine wave, then stretch or compress it on one or more intervals of the x-axis. My question is: which branch of mathematics should I use to say that the second wave is identical to the first?

It's a signal processing problem: I need to categorize waves even if *parts* of a wave have a different temporal scale.

Thanks,
Enri
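One standard tool for exactly this situation, comparing signals that match in shape but differ in local time scale, is dynamic time warping (DTW). The thread itself names no method, so the following is only an illustrative sketch in plain Python: an O(nm) dynamic programme with an absolute-difference cost.

```python
import math

def dtw_distance(a, b):
    """Dynamic time warping distance between two sequences.

    dp[i][j] holds the minimal cumulative cost of aligning the first i
    samples of `a` with the first j samples of `b`; a step may repeat a
    sample of either sequence, which is what absorbs local stretching
    or compression of the time axis."""
    n, m = len(a), len(b)
    INF = float("inf")
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # repeat b's sample
                                  dp[i][j - 1],      # repeat a's sample
                                  dp[i - 1][j - 1])  # advance both
    return dp[n][m]

# A sine wave, and the same wave on a non-uniformly warped time axis
# (t -> t**1.3, rescaled so both warps cover the same range 0..200).
ref = [math.sin(2 * math.pi * t / 50) for t in range(200)]
warped = [math.sin(2 * math.pi * (t ** 1.3 / 200 ** 0.3) / 50) for t in range(200)]

pointwise = sum(abs(x - y) for x, y in zip(ref, warped))
print("DTW:", dtw_distance(ref, warped), " pointwise:", pointwise)
```

Because the diagonal-only path reproduces the plain pointwise distance, DTW can never exceed it, and it typically drops well below it when one wave is a locally stretched copy of the other. A DTW distance that is near zero for warped copies but large for genuinely different shapes makes a workable dissimilarity measure for the categorization task described above.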
http://math.stackexchange.com/questions/199155/finding-the-minimum-length-of-an-addition-chain
# Finding the minimum length of an addition chain

It is known that for every positive integer $n$ there exist one or more optimal addition chains of minimum length. It is rumored that finding the length of the optimal chain is NP-hard, and the related Wikipedia article only provides methods to calculate relatively short chains, but not the optimal chain.

1. What methods exist that can find the optimal addition chain for a given $n$?
2. How fast are these methods, and how well do they scale with $n$?
3. Do methods exist that will only calculate the length of the optimal chain, and could they be faster than ordinary methods?
4. Could a function that relates to addition chains be recursive?

- I seem to remember that there is discussion of this topic in Volume 2 of Knuth, The Art of Computer Programming, in connection with finding the optimal sequence of multiplications for calculating $x^n$ for given integer $n$. – MJD Sep 19 '12 at 14:23
- The Wikipedia page provides a reference where NP-completeness is claimed proven. – Sasha Sep 19 '12 at 16:32
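As a concrete illustration of question 1, the simplest exact method is exhaustive search with iterative deepening; this is a sketch of my own, not a method proposed in the thread. It relies on the fact that any addition chain can be reordered into a strictly increasing one of the same length, so it suffices to enumerate increasing chains, pruning branches that cannot reach $n$ even by doubling at every remaining step.

```python
def shortest_addition_chain(n):
    """Return one minimal addition chain 1 = a_0 < a_1 < ... < a_r = n,
    where every element after the first is the sum of two (not necessarily
    distinct) earlier elements.  Iterative-deepening DFS: try chain
    lengths 1, 2, 3, ... in order, so the first chain found is optimal.
    Exponential time; practical only for modest n."""
    limit = 1
    while True:
        found = _search([1], n, limit)
        if found is not None:
            return found
        limit += 1

def _search(chain, n, limit):
    last = chain[-1]
    if last == n:
        return list(chain)
    if len(chain) == limit:
        return None
    # Prune: doubling at every remaining step is the fastest possible growth.
    if last << (limit - len(chain)) < n:
        return None
    # WLOG the chain is strictly increasing, so only try sums above `last`;
    # larger candidates first tends to reach n sooner.
    candidates = sorted({a + b for a in chain for b in chain
                         if last < a + b <= n}, reverse=True)
    for c in candidates:
        chain.append(c)
        found = _search(chain, n, limit)
        chain.pop()
        if found is not None:
            return found
    return None
```

For example, `shortest_addition_chain(15)` returns a chain of six elements (five additions), such as 1, 2, 3, 6, 12, 15, one addition fewer than binary exponentiation uses for 15. For question 3, the same search gives the length alone as `len(chain) - 1`; and, incidentally for question 4, the search is naturally recursive.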
https://jascoinc.com/knowledgebase/enantioselective-separation-and-zebrafish-embryo-toxicity-of-insecticide-beta-cypermethrin/
# Enantioselective separation and zebrafish embryo toxicity of insecticide beta-cypermethrin

April 23, 2020

## Title

Enantioselective separation and zebrafish embryo toxicity of insecticide beta-cypermethrin

## Author

Chao Xu, Wenqing Tu, Chun Lou, Yingying Hong, Meirong Zhao

## Year

2010

## Journal

Journal of Environmental Sciences

## Abstract

Enantioselectivity of chiral pollutants is receiving growing concern due to the difference in toxicology and environment fate between enantiomers. In this study, enantiomers of insecticide beta-cypermethrin (beta-CP) were separated on selected chiral column by HPLC, and the toxicity of enantiomers was evaluated using the zebrafish embryo-larval assays. The enantiomers of beta-CP were baseline separated on Chiralcel OD and Chiralpak AD columns and detected by circular dichroism (CD) at 236 nm. Better separation could be achieved at lower temperature (e.g., 20°C) and with lower levels of polar modifiers. Pure enantiomers were obtained on Chiralcel OD. The CD spectra of enantiomers were recorded. By comparing the elution order with a previous similar study, the absolute configuration of beta-CP enantiomers was determined. The individual enantiomers were used in zebrafish embryo test, and the results showed that beta-CP enantioselectively induced yolk sac edema, pericardial edema and crooked body. The 1R-cis-αS and 1R-trans-αS enantiomers showed strong developmental toxicities at concentration of 0.1 mg/L, while the 1S-cis-αR and 1S-trans-αR induced no malformations at higher concentration (e.g., 0.3 mg/L). The results suggest that the enantioselective toxicological effects of beta-CP should be considered when evaluating its ecotoxicological effects.
https://en.wikisource.org/wiki/Childe_Harold%27s_Pilgrimage
# The Works of Lord Byron (ed. Coleridge, Prothero)/Poetry/Volume 2/Childe Harold's Pilgrimage

Poetical Works OF LORD BYRON.

Ianthe, from an engraving by W. Finden, after a drawing by R. Westall, R.A.

The Works OF LORD BYRON. A NEW, REVISED AND ENLARGED EDITION, WITH ILLUSTRATIONS. Poetry. Vol. II. EDITED BY ERNEST HARTLEY COLERIDGE, M.A. LONDON: JOHN MURRAY, ALBEMARLE STREET. NEW YORK: CHARLES SCRIBNER'S SONS. 1899.

PREFACE TO THE SECOND VOLUME.

The text of the present edition of Childe Harold's Pilgrimage is based upon a collation of volume i. of the Library Edition, 1855, with the following MSS.:

(i.) the original MS. of the First and Second Cantos, in Byron's handwriting [MS. M.];
(ii.) a transcript of the First and Second Cantos, in the handwriting of R. C. Dallas [D.];
(iii.) a transcript of the Third Canto, in the handwriting of Clara Jane Clairmont [C.];
(iv.) a collection of "scraps," forming a first draft of the Third Canto, in Byron's handwriting [MS.];
(v.) a fair copy of the first draft of the Fourth Canto, together with the MS. of the additional stanzas, in Byron's handwriting [MS. M.];
(vi.) a second fair copy of the Fourth Canto, as completed, in Byron's handwriting [D.].

The text of the First and Second Cantos has also been collated with the text of the First Edition of the First and Second Cantos (quarto, 1812); the text of the Third and of the Fourth Cantos with the texts of the First Editions of 1816 and 1818 respectively; and the text of the entire poem with that issued in the collected editions of 1831 and 1832.

Considerations of space have determined the position and arrangement of the notes. Byron's notes to the First, Second, and Third Cantos, and Hobhouse's notes to the Fourth Canto are printed, according to precedent, at the end of each canto. Editorial notes are placed in square brackets. Notes illustrative of the text are printed immediately below the variants.
Notes illustrative of Byron's notes or footnotes are appended to the originals or printed as footnotes. Byron's own notes to the Fourth Canto are printed as footnotes to the text. Hobhouse's "Historical Notes" are reprinted without addition or comment; but the numerous and intricate references to classical, historical, and archæological authorities have been carefully verified, and in many instances rewritten. In compiling the Introductions, the additional notes, and footnotes, I have endeavoured to supply the reader with a compendious manual of reference. With the subject-matter of large portions of the three distinct poems which make up the five hundred stanzas of Childe Harold's Pilgrimage every one is more or less familiar, but details and particulars are out of the immediate reach of even the most cultivated readers. The poem may be dealt with in two ways. It may be regarded as a repertory or treasury of brilliant passages for selection and quotation; or it may be read continuously, and with some attention to the style and message of the author. It is in the belief that Childe Harold should be read continuously, and that it gains by the closest study, reassuming its original freshness and splendour, that the text as well as Byron's own notes have been somewhat minutely annotated. In the selection and composition of the notes I have, in addition to other authorities, consulted and made use of the following editions of Childe Harold's Pilgrimage: i. Édition Classique, par James Darmesteter, Docteur-ès-lettres. Paris, 1882. ii. Byron's Childe Harold, edited, with Introduction and Notes, by H. F. Tozer, M.A. Oxford, 1885 (Clarendon Press Series). iii. Childe Harold's Pilgrimage, edited by the Rev. E. C. Everard Owen, M.A. London, 1897 (Arnold's British Classics). Particular acknowledgments of my indebtedness to these admirable works will be found throughout the volume. 
I have consulted and derived assistance from Professor Eugen Kölbing's exhaustive collation of the text of the two first cantos with the Dallas Transcript in the British Museum (Zur Textüberlieferung von Byron's Childe Harold, Cantos I., II. Leipsic, 1896); and I am indebted to the same high authority for information with regard to the Seventh Edition (1814) of the First and Second Cantos. (See Bemerkungen zu Byron's Childe Harold, Engl. Stud., 1896, xxi. 176-186.) I have again to record my grateful acknowledgments to Dr. Richard Garnett, C.B., Dr. A. S. Murray, F.R.S., Mr. R. E. Graves, Mr. E. D. Butler, F.R.G.S., and other officials of the British Museum, for constant help and encouragement in the preparation of the notes to Childe Harold. I desire to express my thanks to Dr. H. R. Mill, Librarian of the Royal Geographical Society; Mr. J. C. Baker, F.R.S., Keeper of the Herbarium and Library of the Royal Botanic Gardens, Kew; Mr. Horatio F. Brown (author of Venice, an Historical Sketch, etc.); Mr. P. A. Daniel, Mr. Richard Edgcumbe, and others, for valuable information on various points of doubt and difficulty. On behalf of the Publisher, I beg to acknowledge the kindness of his Grace the Duke of Richmond, in permitting Cosway's miniature of Charlotte Duchess of Richmond to be reproduced for this volume. I have also to thank Mr. Horatio F. Brown for the right to reproduce the interesting portrait of "Byron at Venice," which is now in his possession. April, 1899. INTRODUCTION TO THE FIRST AND SECOND CANTOS OF CHILDE HAROLD. The First Canto of Childe Harold was begun at Janina, in Albania, October 31, 1809, and the Second Canto was finished at Smyrna, March 28, 1810. The dates were duly recorded on the MS.; but in none of the letters which Byron wrote to his mother and his friends from the East does he mention or allude to the composition or existence of such a work. In one letter, however, to his mother (January 14, 1811, Letters, 1898, i. 
308), he informs her that he has MSS. in his possession which may serve to prolong his memory, if his heirs and executors "think proper to publish them;" but for himself, he has "done with authorship." Three months later the achievement of Hints from Horace and The Curse of Minerva persuaded him to give "authorship" another trial; and, in a letter written on board the Volage frigate (June 28, Letters, 1898, i. 313), he announces to his literary Mentor, R. C. Dallas, who had superintended the publication of English Bards, and Scotch Reviewers, that he has "an imitation of the Ars Poetica of Horace ready for Cawthorne." Byron landed in England on July 2, and on the 15th Dallas "had the pleasure of shaking hands with him at Reddish's Hotel, St. James's Street." (Recollections of the Life of Lord Byron, 1824, p. 103). There was a crowd of visitors, says Dallas, and no time for conversation; but the Imitation was placed in his hands. He took it home, read it, and was disappointed. Disparagement was out of the question; but the next morning at breakfast Dallas ventured to express some surprise that he had written nothing else. An admission or confession followed that "he had occasionally written short poems, besides a great many stanzas in Spenser's measure, relative to the countries he had visited." "They are not," he added, "worth troubling you with, but you shall have them all with you if you like." "So," says Dallas, "came I by Childe Harold. He took it from a small trunk, with a number of verses." To this request Byron somewhat reluctantly acceded (August 21); and a few days later (August 25) he informs Dallas that he has sent him "exordiums, annotations, etc., for the forthcoming quarto," and has written to Murray, urging him on no account to show the MS. to Juvenal, that is, Gifford. But Gifford, as a matter of course, had been already consulted, had read the First Canto, and had advised Murray to publish the poem. 
Byron was, or pretended to be, furious; but the solid fact that Gifford had commended his work acted like a charm, and his fury subsided. On the fifth of September (Letters, 1898, ii. 24, note) he received from Murray the first proof, and by December 14 "the Pilgrimage was concluded," and all but the preface had been printed and seen through the press. The original draft of the poem, which Byron took out of "the little trunk" and gave to Dallas, had undergone considerable alterations and modifications before this date. Both Dallas and Murray took exception to certain stanzas which, on personal, or patriotic, or religious considerations, were provocative and objectionable. They were apprehensive, not only for the sale of the book, but for the reputation of its author. Byron fought his ground inch by inch, but finally assented to a compromise. He was willing to cut out three stanzas on the Convention of Cintra, which had ceased to be a burning question, and four more stanzas at the end of the First Canto, which reflected on the Duke of Wellington, Lord Holland, and other persons of less note. A stanza on Beckford in the First Canto, and two stanzas in the second on Lord Elgin, Thomas Hope, and the "Dilettanti crew," were also omitted. Stanza ix. of the Second Canto, on the immortality of the soul, was recast, and "sure and certain" hopelessness exchanged for a pious, if hypothetical, aspiration. But with regard to the general tenor of his politics and metaphysics, Byron stood firm, and awaited the issue. No further alterations were made in the text of the poem; but an eleventh edition of Childe Harold, Cantos I., II., was published in 1819. The demerits of Childe Harold lie on the surface; but it is difficult for the modern reader, familiar with the sight, if not the texture, of "the purple patches," and unattracted, perhaps demagnetized, by a personality once fascinating and always "puissant," to appreciate the actual worth and magnitude of the poem. 
We are "o'er informed;" and as with Nature, so with Art, the eye must be couched, and the film of association removed, before we can see clearly. But there is one characteristic feature of Childe Harold which association and familiarity have been powerless to veil or confuse—originality of design. "By what accident," asks the Quarterly Reviewer (George Agar Ellis), "has it happened that no other English poet before Lord Byron has thought fit to employ his talents on a subject so well suited to their display?" The question can only be answered by the assertion that it was the accident of genius which inspired the poet with a "new song." Childe Harold's Pilgrimage had no progenitors, and, with the exception of some feeble and forgotten imitations, it has had no descendants. The materials of the poem; the Spenserian stanza, suggested, perhaps, by Campbell's Gertrude of Wyoming, as well as by older models; the language, the metaphors, often appropriated and sometimes stolen from the Bible, from Shakespeare, from the classics; the sentiments and reflections coeval with reflection and sentiment, wear a familiar hue; but the poem itself, a pilgrimage to scenes and cities of renown, a song of travel, a rhythmical diorama, was Byron's own handiwork—not an inheritance, but a creation. Childe Harold's Pilgrimage was reviewed, or rather advertised, by Dallas, in the Literary Panorama for March, 1812. To the reviewer's dismay, the article, which appeared before the poem was out, was shown to Byron, who was paying a short visit to his old friends at Harrow. Dallas quaked, but "as it proved no bad advertisement," he escaped censure. "The blunder passed unobserved, eclipsed by the dazzling brilliancy of the object which had caused it." (Recollections, p. 221). Of the greater reviews, the Quarterly (No. xiii., March, 1812) was published on May 12, and the Edinburgh (No. 38, June, 1812) was published on August 5, 1812. NOTES ON THE MSS. OF CHILDE HAROLD. I. The original MS. 
of the First and Second Cantos of Childe Harold, consisting of ninety-one folios bound up with a single bluish-grey cover, is in the possession of Mr. Murray.[1] A transcript from this MS., in the handwriting of R. C. Dallas, with Byron's autograph corrections, is preserved in the British Museum (Egerton MSS., No. 2027). The first edition (4to) was printed from the transcript as emended by the author. The "Addition to the Preface" was first published in the Fourth Edition. The following notes in Byron's handwriting are on the outside of the cover of the original MS.:— "Byron—Joannina in Albania Begun Oct. 31st. 1809. Concluded, Canto 2nd., Smyrna, March 28th, 1810. Byron. "The marginal remarks pencilled occasionally were made by two friends who saw the thing in MS. sometime previous to publication. 1812." On the verso of the single bluish-grey cover, the lines, "Dear Object of Defeated Care," have been inscribed. They are entitled, "Written beneath the picture of J. U. D." They are dated, "Byron, Athens, 1811." The following notes and memoranda have been bound up with the MS.:— "Henry Drury, Harrow. Given me by Lord Byron. Being his original autograph MS. of the first canto of Childe Harold, commenced at Joannina in Albania, proceeded with at Athens, and completed at Smyrna." "How strange that he did not seem to know that the volume contains Cantos I., II., and so written by Ld. B.!" [Note by J. Murray.] "Sir,—I desire that you will settle any account for Childe Harold with Mr. R. C. Dallas, to whom I have presented the copyright. "Yr. obedt. Servt., "Byron "To Mr. John Murray, "Bookseller, "32, Fleet Street, "London, Mar. 17, 1812." "Received, April 1st, 1812, of Mr. John Murray, the sum of one hundred pounds 15/8, being my entire half-share of the profits of the 1st Edition of Childe Harold's Pilgrimage 4to. "R. C. Dallas. "£ 101: 15: 8. 
Mem.: This receipt is for the above sum, in part of five hundred guineas agreed to be paid by Mr. Murray for the Copyright of Childe Harold's Pilgrimage."

The following poems are appended to the MS. of the First and Second Cantos of Childe Harold:

1. "Written at Mrs. Spencer Smith's request, in her memorandum-book—'As o'er the cold sepulchral stone.'"
2. "Stanzas written in passing the Ambracian Gulph, November 14, 1809."
3. "Written at Athens, January 16th, 1810—'The spell is broke, the charm is flown.'"
4. "Stanzas composed October 11, 1809, during the night in a thunderstorm, when the guides had lost the road to Zitza, in the range of mountains formerly called Pindus, in Albania."

On a blank leaf bound up with the MS. at the end of the volume, Byron wrote—

"Dear Ds.,—This is all that was contained in the MS., but the outside cover has been torn off by the booby of a binder.
"Yours ever,
"B."

The volume is bound in smooth green morocco, bordered by a single gilt line. "MS." in gilt lettering is stamped on the side cover.

II. Collation of First Edition, Quarto, 1812, with MS. of the First Canto.

The MS. numbers ninety-one stanzas, the First Edition ninety-three stanzas.

Omissions from the MS.

Stanza vii. "Of all his train there was a henchman page,"—
Stanza viii. "Him and one yeoman only did he take,"—
Stanza xxii. "Unhappy Vathek! in an evil hour,"—
Stanza xxv. "In golden characters right well designed,"—
Stanza xxvii. "But when Convention sent his handy work,"—
Stanza xxviii. "Thus unto Heaven appealed the people: Heaven,"—
Stanza lxxxviii. "There may you read with spectacles on eyes,"—
Stanza lxxxix. "There may you read—Oh, Phœbus, save Sir John,"—
Stanza xc. "Yet here of Vulpes mention may be made,"—

Insertions in the First Edition.

Stanza i. "Oh, thou! in Hellas deemed of heavenly birth,"—
Stanza viii. "Yet oft-times in his maddest mirthful mood,"—
Stanza ix. "And none did love him!—though to hall and bower,"—
Stanza xliii.
"Oh, Albuera! glorious field of grief!"— „ lxxxv. "Adieu, fair Cadiz! yea, a long adieu!"— „ lxxxvi. "Such be the sons of Spain, and strange her Fate,"— „ lxxxviii. "Flows there a tear of Pity for the dead?"— „ lxxxix. "Not yet, alas! the dreadful work is done,"— „ xc. "Not all the blood at Talavera shed,"— „ xci. "And thou, my friend!—since unavailing woe,"— „ xcii. "Oh, known the earliest, and esteemed the most,"— The MS. of the Second Canto numbers eighty stanzas; the First Edition numbers eighty-eight stanzas. Omissions from the MS. Stanza viii. "Frown not upon me, churlish Priest! that I,"— „ xiv. "Come, then, ye classic Thieves of each degree,"— „ xv. "Or will the gentle Dilettanti crew,"— „ lxiii. "Childe Harold with that Chief held colloquy,"— Insertions in the First Edition. Stanza viii. "Yet if, as holiest men have deemed, there be,"— „ ix. "There, Thou! whose Love and Life together fled,'— „ xv. "Cold is the heart, fair Greece! that looks on Thee,"— „ lii. "Oh! where, Dodona! is thine agéd Grove?"— „ lxiii. "Mid many things most new to ear and eye,"— „ lxxx. "Where'er we tread 'tis haunted, holy ground,"— „ lxxxiii. "Let such approach this consecrated Land,"— „ lxxxiv. "For thee, who thus in too protracted song,"— „ lxxxv. "Thou too art gone, thou loved and lovely one!"— „ lxxxvi. "Oh! ever loving, lovely, and beloved!"— „ lxxxvii. "Then must I plunge again into the crowd,"— „ lxxxviii. "What is the worst of woes that wait on Age?"— Additions to the Seventh Edition, 1814. The Second Canto, in the first six editions, numbers eighty-eight stanzas; in the Seventh Edition the Second Canto numbers ninety-eight stanzas. The Dedication, To Ianthe. Stanza xxvii. "More blest the life of godly Eremite,"— „ lxxvii. "The city won for Allah from the Giaour,"— „ lxxviii. "Yet mark their mirth, ere Lenten days begin,"— „ lxxix. "And whose more rife with merriment than thine,"— „ lxxx. "Loud was the lightsome tumult on the shore,"— „ lxxxi. 
"Glanced many a light Caique along the foam,"— „ lxxxii. "But, midst the throng in merry masquerade,"— „ lxxxiii. "This must he feel, the true-born son of Greece,"— „ lxxxix. "The Sun, the soil—but not the slave, the same,"— „ xc. "The flying Mede, his shaftless broken bow,"— ITINERARY. Note to "Itinerary." [For dates and names of towns and villages, see Travels in Albania, and other Provinces of Turkey, in 1809 and 1810, by the Right Hon. Lord Broughton, G.C.B. [John Cam Hobhouse], two volumes, 1858. The orthography is based on that of Longmans' Gazetteer of the World, edited by G. G. Chisholm, 1895. The alternative forms are taken from Heinrich Kiepert's Carte de l'Épire et de la Thessalie, Berlin, 1897, and from Dr. Karl Peucker's Griechenland, Wien, 1897.] CONTENTS OF VOL. II. CHILDE HAROLD'S PILGRIMAGE. PAGE Preface to Vol. II. of the Poems v Introduction to the First and Second Cantos ix Notes on the MSS. of the First and Second Cantos xvi Itinerary xxi Preface to the First and Second Cantos 3 To Ianthe 11 Canto the First 15 Notes 85 Canto the Second 97 Notes 165 Introduction to Canto the Third 211 Canto the Third 215 Notes 291 Introduction to Canto the Fourth 311 Original Draft, etc., of Canto the Fourth 316 Dedication 321 Canto the Fourth 327 Historical Notes by J. C. Hobhouse 465 LIST OF ILLUSTRATIONS. 1 Ianthe (Lady Charlotte Harley), from an Engraving by W. Finden, after a Drawing by R. Westall, R.A. Frontispiece 2 The Duchess of Richmond, from a Miniature by Richard Cosway, in the Possession of His Grace the Duke of Richmond and Gordon, K.G. To face p.228 3 Portrait of Lord Byron at Venice, from a Painting in Oils by Ruckard, in the Possession of Horatio F. Brown, Esq. „„326 4 The Horses of St. Mark, from a Photograph by Alinari „„338 5 S. Pantaleon, from a Woodcut published at Cremona in 1493 „„340 6 The Dying Gaul, from the Original in the Museum of the Capitol „„432 CHILDE HAROLD'S PILGRIMAGE. A ROMAUNT. 
"L'univers est une espèce de livre, dont on n'a lu que la première page quand on n'a vu que son pays. J'en ai feuilleté un assez grand nombre, que j'ai trouvé également mauvaises. Cet examen ne m'a point été infructueux. Je haïssais ma patrie. Toutes les impertinences des peuples divers, parmi lesquels j'ai vécu, m'ont réconcilié avec elle. Quand je n'aurais tiré d'autre bénéfice de mes voyages que celui-là, je n'en regretterais ni les frais ni les fatigues."—Le Cosmopolite, ou, le Citoyen du Monde, par Fougeret de Monbron. Londres, 1753.

PREFACE.[2]

[TO THE FIRST AND SECOND CANTOS.]

The following poem was written, for the most part, amidst the scenes which it attempts[3] to describe. It was begun in Albania; and the parts relative to Spain and Portugal were composed from the author's observations in those countries. Thus much it may be necessary to state for the correctness of the descriptions. The scenes attempted to be sketched are in Spain, Portugal, Epirus, Acarnania and Greece. There, for the present, the poem stops: its reception will determine whether the author may venture to conduct his readers to the capital of the East, through Ionia and Phrygia: these two cantos are merely experimental.

A fictitious character is introduced for the sake of giving some connection to the piece; which, however, makes no pretension to regularity. It has been suggested to me by friends, on whose opinions I set a high value,[4] that in this fictitious character, "Childe Harold," I may incur the suspicion of having intended some real personage: this I beg leave, once for all, to disclaim—Harold is the child of imagination, for the purpose I have stated.
In some very trivial particulars, and those merely local, there might be grounds for such a notion;[5] but in the main points, I should hope, none whatever.[6]

It is almost superfluous to mention that the appellation "Childe,"[7] as "Childe Waters," "Childe Childers," etc., is used as more consonant with the old structure of versification which I have adopted. The "Good Night" in the beginning of the first Canto, was suggested by Lord Maxwell's "Good Night"[8] in the Border Minstrelsy, edited by Mr. Scott.

With the different poems[9] which have been published on Spanish subjects, there may be found some slight coincidence[10] in the first part, which treats of the Peninsula, but it can only be casual; as, with the exception of a few concluding stanzas, the whole of the poem was written in the Levant.

The stanza of Spenser, according to one of our most successful poets, admits of every variety. Dr. Beattie makes the following observation:—

"Not long ago I began a poem in the style and stanza of Spenser, in which I propose to give full scope to my inclination, and be either droll or pathetic, descriptive or sentimental, tender or satirical, as the humour strikes me; for, if I mistake not, the measure which I have adopted admits equally of all these kinds of composition."[11]

Strengthened in my opinion by such authority, and by the example of some in the highest order of Italian poets, I shall make no apology for attempts at similar variations in the following composition;[12] satisfied that, if they are unsuccessful, their failure must be in the execution, rather than in the design sanctioned by the practice of Ariosto, Thomson, and Beattie.

London, February, 1812.

I have now waited till almost all our periodical journals have distributed their usual portion of criticism.
To the justice of the generality of their criticisms I have nothing to object; it would ill become me to quarrel with their very slight degree of censure, when, perhaps, if they had been less kind they had been more candid. Returning, therefore, to all and each my best thanks for their liberality, on one point alone I shall venture an observation. Amongst the many objections justly urged to the very indifferent character of the "vagrant Childe" (whom, notwithstanding many hints to the contrary, I still maintain to be a fictitious personage), it has been stated, that, besides the anachronism, he is very unknightly, as the times of the Knights were times of Love, Honour, and so forth.[13]

Now it so happens that the good old times, when "l'amour du bon vieux tems, l'amour antique," flourished, were the most profligate of all possible centuries. Those who have any doubts on this subject may consult Sainte-Palaye, passim, and more particularly vol. ii. p. 69.[14] The vows of chivalry were no better kept than any other vows whatsoever; and the songs of the Troubadours were not more decent, and certainly were much less refined, than those of Ovid. The "Cours d'Amour, parlemens d'amour, ou de courtoisie et de gentilesse" had much more of love than of courtesy or gentleness. See Rolland[15] on the same subject with Sainte-Palaye.

Whatever other objection may be urged to that most unamiable personage Childe Harold, he was so far perfectly knightly in his attributes—"No waiter, but a knight templar."[16] By the by, I fear that Sir Tristrem and Sir Lancelot were no better than they should be, although very poetical personages and true knights, "sans peur," though not "sans reproche." If the story of the institution of the "Garter" be not a fable, the knights of that order have for several centuries borne the badge of a Countess of Salisbury, of indifferent memory. So much for chivalry.
Burke need not have regretted that its days are over, though Marie-Antoinette was quite as chaste as most of those in whose honour lances were shivered, and knights unhorsed.[17] Before the days of Bayard, and down to those of Sir Joseph Banks[18] (the most chaste and celebrated of ancient and modern times) few exceptions will be found to this statement; and I fear a little investigation will teach us not to regret these monstrous mummeries of the middle ages.

I now leave "Childe Harold" to live his day such as he is; it had been more agreeable, and certainly more easy, to have drawn an amiable character. It had been easy to varnish over his faults, to make him do more and express less, but he never was intended as an example, further than to show, that early perversion of mind and morals leads to satiety of past pleasures and disappointment in new ones, and that even the beauties of nature and the stimulus of travel (except ambition, the most powerful of all excitements) are lost on a soul so constituted, or rather misdirected. Had I proceeded with the Poem, this character would have deepened as he drew to the close; for the outline which I once meant to fill up for him was, with some exceptions, the sketch of a modern Timon,[19] perhaps a poetical Zeluco.[20]

1. "The first and second cantos of Childe Harold were written in separate portions by the noble author. They were afterwards arranged for publication; and when thus arranged, the whole was copied. This copy was placed in Lord Byron's hands, and he made various alterations, corrections, and large additions. These, together with the notes, are in his Lordship's own handwriting. The manuscript thus corrected was sent to the press, and was printed under the direction of Robt. Chas. Dallas, Esq., to whom Lord Byron had given the copyright of the poem. The MS., as it came from the printers, was preserved by Mr. Dallas, and is now in the possession of his son, the Rev. Alex. Dallas." [See Dallas Transcript, p. 1.
Mus. Brit. Bibl. Egerton, 2027. Press 526. H. T.]

3. Professes to describe.—[MS. B.M.]

4. —— that in the fictitious character of "Childe Harold" I may incur the suspicion of having drawn "from myself." This I beg leave once for all to disclaim. I wanted a character to give some connection to the poem, and the one adopted suited my purpose as well as any other.—[MS. B.M.]

5. Such an idea.—[MS. B.M.]

6. My readers will observe that where the author speaks in his own person he assumes a very different tone from that of "The cheerless thing, the man without a friend," at least, till death had deprived him of his nearest connections. I crave pardon for this Egotism, which proceeds from my wish to discard any probable imputation of it to the text.—[MS. B.M.]

7. ["In the 13th and 14th centuries the word 'child,' which signifies a youth of gentle birth, appears to have been applied to a young noble awaiting knighthood, e.g. in the romances of Ipomydon, Sir Tryamour, etc. It is frequently used by our old writers as a title, and is repeatedly given to Prince Arthur in the Faërie Queene" (N. Eng. Dict., art. "Childe"). Byron uses the word in the Spenserian sense, as a title implying youth and nobility.]

8. [John, Lord Maxwell, slew Sir James Johnstone at Achmanhill, April 6, 1608, in revenge for his father's defeat and death at Dryffe Sands, in 1593. He was forced to flee to France. Hence his "Good Night." Scott's ballad is taken, with "some slight variations," from a copy in Glenriddel's MSS.—Minstrelsy of the Scottish Border, 1810, i. 290-300.]

9. [Amongst others, The Battle of Talavera, by John Wilson Croker, appeared in 1809; The Vision of Don Roderick, by Walter Scott, in 1811; and Portugal, a Poem, by Lord George Grenville, in 1812.]

10. Some casual coincidence.—[MS. B.M.]

11. Beattie's Letters. [See letter to Dr. Blacklock, September 22, 1766 (Life of Beattie, by Sir W. Forbes, 1806, i. 89).]

12. Satisfied that their failure.—[MS. B.M.]

13.
[See Quarterly Review, March, 1812, vol. vii. p. 191: "The moral code of chivalry was not, we admit, quite pure and spotless, but its laxity on some points was redeemed by the noble spirit of gallantry which courted personal danger in the defence of the sovereign ... of women because they are often lovely, and always helpless; and of the priesthood... Now, Childe Harold, if not absolutely craven and recreant, is at least a mortal enemy to all martial exertion, a scoffer at the fair sex, and, apparently, disposed to consider all religions as different modes of superstition." The tone of the review is severer than the Preface indicates. Nor does Byron attempt to reply to the main issue of the indictment, an unknightly aversion from war, but rides off on a minor point, the licentiousness of the Troubadours.]

14. [See Mémoires sur l'Ancienne Chevalerie, par M. De la Curne de Sainte-Palaye, Paris, 1781: "Qu'on lise dans l'auteur du roman de Gérard de Roussillon, en Provençal, les détails très-circonstanciés dans lesquels il entre sur la réception faite par le Comte Gérard à l'ambassadeur du roi Charles; on y verra des particularités singulières qui donnent une étrange idée des mœurs et de la politesse de ces siècles aussi corrompus qu'ignorans." (ii. 69). See, too, ibid., ante, p. 65: "Si l'on juge des mœurs d'un siècle par les écrits qui nous en sont restés, nous serons en droit de juger que nos ancêtres observèrent mal les loix que leur prescrivirent la décence et l'honnêteté."]

15. [See Recherches sur les Prérogatives des Dames chez les Gaulois sur les Cours d'Amours, par M. le Président Rolland [d'Erceville], de l'Académie d'Amiens. Paris, 1787, pp. 18-30, 117, etc.]

16. [The phrase occurs in The Rovers, or the Double Arrangement (Poetry of the Anti-Jacobin, 1854, p. 199), by J. Hookham Frere, a skit on the "moral inculcated by the German dramas—the reciprocal duties of one or more husbands to one or more wives."
The waiter at the Golden Eagle at Weimar is a warrior in disguise, and rescues the hero, who is imprisoned in the abbey of Quedlinburgh.]

17. ["But the age of chivalry is gone—the unbought grace of life, the cheap defence of nations," etc. (Reflections on the Revolution in France, by the Right Hon. Edmund Burke, M.P., 1868, p. 89).]

18. [Passages relating to the Queen of Tahiti, in Hawkesworth's Voyages, drawn from journals kept by the several commanders, and from the papers of Joseph Banks, Esq. (1773, ii. 106), gave occasion to malicious and humorous comment. (See An Epistle from Mr. Banks, Voyager, Monster-hunter, and Amoroso, To Oberea, Queen of Otaheite, by A.B.C.) The lampoon, "printed at Batavia for Jacobus Opani" (the Queen's Tahitian for "Banks"), was published in 1773. The authorship is assigned to Major John Scott Waring (1747-1819).]

19. [Compare Childish Recollections: Poetical Works, 1898, i. 84, var. i.—
"Weary of love, of life, devour'd with spleen,
I rest a perfect Timon, not nineteen."]

20. [John Moore (1729-1802), the father of the celebrated Sir John Moore, published Zeluco. Various Views of Human Nature, taken from Life and Manners, Foreign and Domestic, in 1789. Zeluco was an unmitigated scoundrel, who led an adventurous life; but the prolix narrative of his villanies does not recall Childe Harold. There is, perhaps, some resemblance between Zeluco's unbridled childhood and youth, due to the indulgence of a doting mother, and Byron's early emancipation from discipline and control.]
Chapter 77

### Introduction

All those, therefore, who have cataract see the light more or less, and by this we distinguish cataract from amaurosis and glaucoma, for persons affected with these complaints do not perceive the light at all.

Paul of Aegina (615–690)

The commonest cause of visual dysfunction is a simple refractive error. However, there are many causes of visual failure, including the emergency of sudden blindness, a problem that requires a sound management strategy. Apart from migraine, virtually all cases of sudden loss of vision require urgent treatment.

The 'white' eye or uninflamed eye presents a different clinical problem from the red or inflamed eye.1 The 'white' eye is painless and usually presents with visual symptoms, and it is in the 'white' eye that the majority of blinding conditions occur.

### Criteria for blindness and driving

This varies from country to country. The WHO defines blindness as 'best visual acuity less than 3/60', while in Australia eligibility for the blind pension is 'bilateral corrected visual acuity less than 6/60 or significant visual field loss' (e.g. a patient can have 6/6 vision but severely restricted fields caused by chronic open-angle glaucoma). The minimum standard for driving is 6/12 (Snellen system).

Key facts and checkpoints

• The commonest cause of blindness in the world is trachoma. The other major causes of gradual blindness are cataracts, onchocerciasis and vitamin A deficiency.2
• In Western countries the commonest causes are senile cataract, glaucoma, age-related macular degeneration, trauma and the retinopathy of diabetes mellitus.2
• The commonest causes of sudden visual loss are transient occlusion of the retinal artery (amaurosis fugax) and migraine.3
• 'Flashing lights' are caused by traction on the retina and may have a serious connotation: the commonest cause is vitreoretinal traction, which is a classic cause of retinal detachment.
• The presence of floaters or 'blobs' in the visual fields indicates pigment in the vitreous: causes include vitreous haemorrhage and vitreous detachment.
• Posterior vitreous detachment is the commonest cause of the acute onset of floaters, especially with advancing age.
• Retinal detachment has a tendency to occur in short-sighted (myopic) people.
• Suspect a macular abnormality where objects look smaller or straight lines are bent or distorted.

### The clinical approach

#### History

The history should carefully define the onset, progress, duration, offset and the extent of visual loss. An accurate history is important because a longstanding visual defect may only just have been noticed by the patient, especially if it is unilateral. Two questions need to be answered:

• Is the loss unilateral or bilateral?
• Is the onset acute, or gradual and progressive?

The distinction between central and peripheral visual loss is useful. Central visual loss presents as impairment of visual acuity and implies defective retinal image formation (through refractive error or opacity in the ocular ...
2011 Fri 11 Nov

GCE O Level 2011 Oct/Nov Physics 5058 (MCQ) Paper 1 Suggested Answers & Solutions (96)

Posted at 9:38 am

Hello again everyone,

After having taken possession of a certain booklet, I'm sitting in my sister's car, looking up at the bright blue morning sky, while sending out this series of codes:

Q1-10 CCBCB AACCB
Q11-20 DCBDB CBADA
Q21-30 ACABC BCBAB
Q31-40 DCBDA BDDAD

Note: Answer for Q33 has been changed to B. Sorry for the misinformation!

Do they tally with yours?

Update: The list of workings and explanations for each of the answers (where applicable) has been compiled (along with the questions at the end)! You may access it by clicking the button (if you haven't yet done so).

2011 O-Level October/November Physics 5058 Paper 1 Suggested Solutions

Hopefully this will help in some way to settle some of the debates regarding some of the answers (and hopefully not create more in the process!)

Good luck for those of you with your remaining Science MCQ papers! ALMOST THERE!!!

Revision Exercise

To show that you have understood what Miss Loi just taught you, you must:

1. YQ says 2011 Nov 11 Fri 10:05am

for ques 11, I thought that the answer should be B? cuz only a stretched spring will have elastic potential energy... so at Q is potential and at P is kinetic?

2. Wong Xin Hui says 2011 Nov 11 Fri 10:17am

Hello. Thank you for providing answers(: However, I thought for question 6, the answer should be A? Only when all the arrows point in one direction, there will be no resultant force. As the nail does not move, so, that means there is no resultant force, right? (:

3. HP says 2011 Nov 11 Fri 10:20am

Why is Q6 D? Shouldn't it be A? Since the nail does not move.

4. 2011 Nov 11 Fri 10:26am

Hi miss Loi. Question 37 should be D.

5. Rachel says 2011 Nov 11 Fri 10:30am

Can you relook Q30? I thought current passes through only one resistor. And also for Q37, cos I still get answer D.
• 2011 Nov 11 Fri 10:48am 5.1

@Rachel: For Q30, answer is B as the current comes out from the alternating current supply, it can go through all four resistors if it travels in the clockwise direction, but when in the anti-clockwise direction the current cannot pass through the resistors which are along the same route as the diode. Hence only 2 resistors carry the current in 2 directions, and the resistors adjacent to the diode carry current in ONE direction only.

As for Q37, the answer is amended to D - thanks for highlighting! *drawing a diagram to illustrate now* ...

6. fb says 2011 Nov 11 Fri 10:38am

For qn37, I calculated the voltage and got D as my answer. How to calculate the voltage for qn37?

• 2011 Nov 11 Fri 11:42am 6.1

@fb: A lot of people may think that B is the answer (which I did the same as well!!) in order to achieve a higher potential on the first transformer's output. Here's the diagram for Q37 which I hope will be sufficient to illustrate how the voltages at various stages are obtained:

7. fb says 2011 Nov 11 Fri 10:42am

Can you explain why qn36 is B?

• 2011 Nov 11 Fri 11:47am 7.1

@fb: Induced emf is produced when a conductor (wire) experiences a rate of change of magnetic flux linkage (Faraday's Law). In layman terms, the wire will only have emf if it "feels" a changing magnetic field. At P and Q, the magnet "stops" to make a U-turn. This means when stationary, it experiences no change in magnetic field, hence no emf.

8. hi says 2011 Nov 11 Fri 10:48am

hi:) thx for the ans but may i ask why qn 37 is b instead of d??

9. Jay says 2011 Nov 11 Fri 10:51am

Can someone do an explanation for 37? Using Np/Ns = Vp/Vs, my best guess is B
Vp/Vs = 12V/6V = 2
So... Np/Ns must be 1000/500 = 2 ~

10. hy says 2011 Nov 11 Fri 10:51am

should question 33 be B? When a person touches the live wire the fuse will not blow as a person has a very high resistance. Hence, the current flowing should be low.

11. 2011 Nov 11 Fri 10:57am

is this same as 5057 paper?
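[Editor's aside: Jay's working above for Q37 uses the ideal-transformer relation Np/Ns = Vp/Vs. As a quick sanity check, the sketch below plugs in the 12 V, 1000-turn and 500-turn figures quoted in Jay's comment — these are the comment's numbers, not a claim about the actual exam diagram.]

```python
# Quick numerical check of the ideal-transformer relation Vp/Vs = Np/Ns,
# using the 12 V primary and 1000:500 turns ratio quoted in Jay's comment.
def secondary_voltage(vp, np_turns, ns_turns):
    """Ideal transformer: Vs = Vp * Ns / Np (no losses assumed)."""
    return vp * ns_turns / np_turns

vs = secondary_voltage(12.0, 1000, 500)
print(vs)  # 6.0 -> Vp/Vs = 2, matching Np/Ns = 1000/500 = 2
```

The same one-liner is handy for chasing voltages through the two-transformer chain in Q37: feed each stage's output into the next stage's `vp`.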
12. anonymous says 2011 Nov 11 Fri 11:20am

if the answers here are right, my mark should be 37/40 which should be alright. I have a question though. For question 39, my friends say the answer is option B but i wrote option D which tallies with your answer. Can you explain question 39? I don't really understand. And for question 36, why is the answer B? Please enlighten me. THANKS!

13. IP says 2011 Nov 11 Fri 11:33am

For question 23, answer is A because the ray bends inwards after passing through the lens?

• 2011 Nov 11 Fri 6:32pm 13.1

@IP: Actually no - more that the rays need to connect through the focal length and image and the object in a straight line. The diagram in A fits the model of a ray diagram for a magnifying glass image from a converging lens, hence it is a converging lens. But well at least you chose A in the end - you got the mark and now let's learn from this!

• Claudia Tan @CLOUDmonsterr replied to IP's comment 2011 Nov 11 Fri 11:17pm 13.2

@IP: Hi Ms Loi, can you please relook the answer for Q3. My answer was (C) 16m. There are two parts of the graph where the trolley was moving with constant speed, and the total distance travelled is (7-4)(4)+(1/2)(9-7)(4)=16, isn't it?

• 2011 Nov 12 Sat 3:24pm

Errr ... the trolley cannot be travelling at constant speed at the '(1/2)(9-7)(4)' part since its speed clearly decreases from 4 m/s to 0 m/s from the 7th to the 9th second. This is constant deceleration!

14. 2011 Nov 11 Fri 11:38am

Can you relook qn 33? Isn't it A?

15. ces @nil says 2011 Nov 11 Fri 11:56am

16. Hazle says 2011 Nov 11 Fri 11:58am

Hi Miss Loi. For question 38, why will the brightness of L1 remain the same (option D)? Since current flowing through L1 decreases when the resistance of the LDR decreases.

• 2011 Nov 11 Fri 12:05pm 16.1

@Hazle: The brightness of the lamp is dependent on the potential difference across the lamp, which is held constant by the battery connected in parallel to it.
Though the resistance is changed in the circuit, the current flowing through the resistors also changes, so it's a better gauge to look at p.d. instead of current.

• 2011 Nov 11 Fri 12:11pm

Ok thanks a lot =)

• 2011 Nov 14 Mon 12:43pm

SQhi objects! Just so you know, guys, I took the same paper =)

IMHO, the brightness of the bulb should be dependent on the power output of the bulb, and we know that P=IV. I assume everyone has a copy of the paper here. Let's call the branch on the left with L1 Branch A, and the branch on the right with resistor R & L2, Branch B.

Miss Loi, as you have stated, since Branch A and Branch B are parallel, they would have the same pd as the emf of the battery remains unchanged. Hence, the decisive factor should be current instead.

Now, as light intensity increases, resistance of R decreases, thus, the overall resistance of the circuit decreases. The resistance of Branch B also decreases, and the current flowing through Branch B increases. Hence, as Branch B draws a greater current, the current Branch A draws decreases. Hence, by P=IV, the brightness of bulb L1 would decrease.

Oh well, one more bio and one more literature paper to go. JY everyone!!!

P.S.: If you wonder why the brightness of L2 decreases, as resistance of R decreases, the pd across R decreases (Little Miss Loi edit: you mean increases?) hence pd across L2 increases, leading to higher power output. It's kind of "common-sensical" anyway=)

• 2011 Nov 14 Mon 5:01pm

@SQhi: Little Miss Loi objects!!!!!

Yes, as SQhi has said, the brightness of the bulb ultimately depends on its power which in turn is dependent on the current flowing through it.

"Now, as light intensity increases, resistance of R decreases, thus, the overall resistance of the circuit decreases. The resistance of Branch B also decreases, and the current flowing through Branch B increases."

This is true BUT this reduced overall resistance will also lead to an INCREASE in the overall current drawn by the circuit.
"Hence, as Branch B draws a greater current, the current Branch A draws decreases. Hence, by P=IV, the brightness of bulb L1 would decrease."

Yes, Branch B will definitely draw a greater current due to the reduced resistance of R, so L2 will increase in brightness. However, Branch A's current will remain the same given that 1) its pd is unchanged 2) the resistance of L1 is unchanged. So by Ohm's Law, the current through Branch A remains unchanged as well.

I've taken the liberty to sub in some arbitrary p.d. and resistance values just to demonstrate this. As you will see, Branch A's current stays constant due to the increased overall current drawn by the circuit, even though Branch B's current has increased. In short, Ohm's Law will always sort itself out.

P.S. This reminds me to update the pdf file - there's a typo for the answer to Q38!

• 2011 Nov 14 Mon 8:45pm

Opps, typo in my post-note, sorry! (thanks for the correction!) Anyway ya, that was for other visitors, not you =) D= I have never considered Q38 from that angle... D= My O level physics is in jeopardy!!! Serious, nerve-wracking jeopardy... Yet, it feels so good to finally understand this, coz i expect myself to... Anyway could you let me know how to type subscript and superscript in these comments, and how I could edit my own posts?

17. Zhi Jie says 2011 Nov 11 Fri 12:00pm

i believe qn 22 and 23 are wrong. qn22 shld be B since sin(air) / sin(glass) = 1.5 and qn 23 shld be C since its converging lens while the other 3 options are all diverging lens

• 2011 Nov 11 Fri 6:54pm 17.1

@Zhi Jie: For Q22, it is correct to say that the angle of refraction in glass is 28.1° as you've mentioned. However, the question asks for how many degrees does the light ray change direction. Therefore, change in direction = 45° − 28° = 17°.

For Q23, see the diagrams I've painstakingly drawn.

18. 2011 Nov 11 Fri 12:29pm

Hais, why the questions so tricky one..
19. Jolene Lim says
2011 Nov 11 Fri 12:37pm
Shouldn't the last question's answer be 'C'? Because they are asking you about the frequency of the input of the CRO, so it should be only about the screen and not counting the number of waves.

• 2011 Nov 11 Fri 6:38pm
But in Q40, the screen displays two complete waveforms. Since frequency is the inverse of the period of the wave, and the period of the wave is the time taken for ONE wave, the period is 1/400 ÷ 2 = 1/800 s. Hence the frequency is (1/800)^(-1), which is 800 Hz.

20. fb @nil says
2011 Nov 11 Fri 12:55pm
why is q23 B?

21. anon says
2011 Nov 11 Fri 12:58pm
hi so qn 37 is D right?

22. Ian Pang says
2011 Nov 11 Fri 2:50pm
HI... can u explain to me why the ans for qn 33 is C? thks :)

23. Cheryl says
2011 Nov 11 Fri 5:51pm
ARGH.. i feel very extremely stupid after looking at this.. and how come everyone's so smart...

24. Dannie says
2011 Nov 11 Fri 5:52pm
qn11: my ans was C --- becos when u move down spring, elastic potential stored at Q. When u release it is converted to kinetic energy, so at P gt kinetic energy.

25. 2011 Nov 11 Fri 6:00pm
for qn 26, i got D as answer. becos N pole created in left of solenoid and south pole at right side. so, poles induced in rods should be opposite to poles in solenoid. eg. P should be south, which is opposite to north pole of solenoid. so, i gt D as ans. pls comment ty

26. Dennis says
2011 Nov 11 Fri 8:55pm
Hi, for question 33, could you explain why the option is C and not B?? Much appreciated!

27. Skyhigh says
2011 Nov 11 Fri 9:24pm
Hi Miss Loi, can you explain your ans for qns 33 please.. i chose A too

• 2011 Nov 12 Sat 1:40pm
@Skyhigh: It should be B not A! The fuse will only blow when:
• The live wire touches the neutral wire.
• The live wire touches the earth wire.
In both of these cases, a very large current will flow through the circuit/ground since there is practically no resistance in between (this is as good as a short circuit).
As for the other cases involving a human, read this.

28. Chloe says
2011 Nov 11 Fri 10:29pm
Why is question 33's answer C?

29. Loi says
2011 Nov 11 Fri 11:09pm
shouldnt 33 be B ??

30. marc says
2011 Nov 12 Sat 5:19am
question pp can post

31. Poo says
2011 Nov 12 Sat 9:04am
For question 33, i checked the internet and the answer should be B... It does not blow for the first three cases and blows for the last 2. When the person touches the live wire, however, he gets an electric shock but the fuse does not blow. I found the exact question here... http://sg.answers.yahoo.com/question/index?qid=20100312012525AA1rEAC Can you help to confirm this? Thanks

2011 Nov 12 Sat 10:21am
Hi Ms Loi, i think the ans for qn 33 must be B, since the fourth n fifth options will definitely result in a blown fuse, but for a person touching the live wire it's debatable: if current can flow through him with low resistance then the fuse will blow; in the normal case where resistance is high in a human, the fuse will not blow.

• 2011 Nov 12 Sat 2:12pm
@radha: Yes I've amended it to B. The key point here is that no matter what value of the human body's resistance we take, chances are that we are still many times more resistant than a typical appliance drawing an operating current of <3A. From some of the figures I've seen, even a suicidal person immersing his entire foot in conductive liquid next to a power line on the shores of Bedok Reservoir has at least 100 Ω.

33. jess says
2011 Nov 12 Sat 10:38am
Hi what is the safest mark for this paper?

34. Leon says
2011 Nov 12 Sat 12:48pm
Hi, for qn 6, why is the ans A? Pls enlighten thanks. Shouldn't the tension of the string be in the opposite direction to the pull of the string?

• 2011 Nov 12 Sat 3:17pm
@Leon: Since the question specifies that the nail does not move, the system is in equilibrium. This means that all three vector arrows must flow in a closed loop in order for the equilibrium situation to be maintained.
Hence, the tension in the string should always be equal to the pull of the string.

35. 2011 Nov 12 Sat 1:26pm
Good morning (oops, should be good afternoon) everyone! I'll have to agree with the chorus of protests regarding Question of the Year Q33 and have changed the answer to B (sorry for the misinformation!). The main issue here is whether the 3A fuse will blow if a person touches the live wire. A typical home appliance, say a kettle, will typically have a resistance of around 20-50 Ω, and under normal operation will draw an operating current < 3A (else the fuse will blow). The human body, on the other hand, will have a resistance of 500 Ω to 1000 Ω to even 1 000 000 Ω (depending on conditions - some researched values are provided here). No matter what value we take, this resistance is at least a few orders of magnitude higher than the resistance of a typical appliance, and hence a much lower current will flow through the person. But since a current as little as 10 mA is already enough to cause an electric shock, the person who touches the live wire will, in all likelihood, turn into Street Fighter Blanka suffering an electric shock with the fuse staying helplessly intact!

36. Bell says
2011 Nov 12 Sat 4:35pm
Why is qn 39 A? Shouldn't it be B, since the thermistor can decrease its resistance to 0 as it is heated? So shouldn't the output voltage decrease to 0 as it's heated?

• 2011 Nov 13 Sun 11:40pm
@Bell: All thermistors have an operating temperature range which in turn will determine each of their upper and lower resistance limits. The lower resistance limit, however, will never be zero due to the makeup of its electrical components and materials. If you delve deeper, you'll find that the relationship between resistance and temperature is actually exponential.
And if you were to Google "thermistor resistance vs temperature chart" and view the images, you'll see in all the graphs that, given the exponential nature of this relationship, the resistance approaches zero as temperature increases but will never reach zero.

37. sammy says
2011 Nov 13 Sun 9:20am
why is 36 B?

38. Casper says
2012 Apr 16 Mon 7:59pm
I found this really useful... I was looking for the Physics 5058 2011 question paper but I couldn't find it.. Can someone please tell me where it is? Thanks

• 2012 Jun 12 Tue 10:42pm
@Casper: Umm ... this diagram illustrates what will probably happen should Miss Loi commit SEAB treason by uploading more of the hallowed question papers
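The exponential resistance-temperature relationship described above is commonly modelled with the beta-parameter equation. The component values below are illustrative catalogue-style assumptions, not numbers from the exam question:

```python
import math

# Beta-parameter model of an NTC thermistor:
#   R(T) = R25 * exp(B * (1/T - 1/T25)),  with T in kelvin.
R25 = 10_000.0   # ohms at 25 degrees C (assumed value)
B = 3950.0       # beta constant in kelvin (assumed, typical value)

def resistance(temp_c):
    t = temp_c + 273.15
    return R25 * math.exp(B * (1.0 / t - 1.0 / 298.15))

# Resistance falls steeply with temperature but never reaches zero.
for t in (0, 25, 50, 100, 150):
    print(t, round(resistance(t)))
```

This is why the thermistor's output voltage in Q39 cannot drop all the way to zero: the model's resistance only asymptotically approaches zero.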
https://stats.stackexchange.com/questions/139476/why-is-pure-sample-covariance-a-bad-metric-to-understand-the-degree-of-correlati
# Why is pure sample covariance a bad metric to understand the degree of correlation between two variables?

Covariance helps you understand how variables are linearly related. Would it be possible to have two pairs of variables in a deterministic relationship (i.e. linearly correlated variables) that have different values for the covariance? My guess would be that if you had one pair with low sample variances and the other pair with high sample variances, you could have the same relationship but a different covariance. Am I on the right path?

• You are not only on the right path but you've nailed it! This is why correlation is considered a "standardized" covariance. – TrynnaDoStat Feb 26 '15 at 19:23
• @TrynnaDoStat I think you could make your comment into an answer. – Patrick Coulombe Feb 26 '15 at 19:28

One of the reasons covariance is not a good way to measure the strength of a linear relationship is that it is not invariant to deterministic linear transformations. Let $X$ and $Y$ be random variables and let $a$ and $b$ be real numbers. Covariance has the property that $Cov(aX,bY)=ab\,Cov(X,Y)$. Multiplying random variables by constants therefore changes the covariance, which is an undesirable property for measuring the strength of a linear relationship.

As an example of why this is undesirable, consider a scenario where you want to measure the linear relationship between height and weight. If you measure height in inches, you would want your measurement of the strength of the linear relationship to be the same as it would be if you measured height in feet (converting inches to feet multiplies by the constant 1/12). However, the correlation coefficient is invariant to such transformations. That is, $Corr(aX,bY) = Corr(X,Y)$ for $a,b>0$ (the sign flips if $ab<0$), and $Corr(X+a,Y+b) = Corr(X,Y)$.
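A quick numerical illustration of both properties, using simulated data and only the standard library:

```python
import random

# Simulated data with a strong linear relation, then the same x rescaled
# by 12 (think feet -> inches). Covariance scales by 12; correlation
# does not change.

def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (len(xs) - 1)

def corr(xs, ys):
    return cov(xs, ys) / (cov(xs, xs) * cov(ys, ys)) ** 0.5

random.seed(0)
x = [random.gauss(0, 1) for _ in range(1000)]
y = [2 * xi + random.gauss(0, 0.5) for xi in x]
x12 = [12 * xi for xi in x]                    # same quantity, different unit

print(round(cov(x12, y) / cov(x, y)))          # covariance scales by 12
print(abs(corr(x12, y) - corr(x, y)) < 1e-9)   # correlation is unchanged
```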
https://mathematica.stackexchange.com/questions/60772/increasing-maxpoints-in-ndsolve-results-in-memory-issue
# Increasing MaxPoints in NDSolve results in memory issue

I am interested in increasing "MaxPoints" in NDSolve's "MethodOfLines" in an attempt to increase the resolution of the plot of the solution of a linearly damped wave equation with transparent boundary conditions and a square pulse initial condition. Here is my code:

```mathematica
interpolatingFunctLinear[initalPulseFunction_, xBoundLow_, xBoundHigh_, timeBound_] :=
 First[
  pdeY = D[Y[x, t], t, t] + .04 D[Y[x, t], t] == D[Y[x, t], x, x];
  solnDerivativeY =
   NDSolve[{pdeY, Y[x, 0] == initalPulseFunction,
     Derivative[0, 1][Y][x, 0] == 0,
     Derivative[1, 0][Y][xBoundLow, t] == Derivative[0, 1][Y][xBoundLow, t],
     Derivative[1, 0][Y][xBoundHigh, t] == -Derivative[0, 1][Y][xBoundHigh, t]},
    Y, {x, xBoundLow, xBoundHigh}, {t, 0, timeBound},
    Method -> {"MethodOfLines",
      "SpatialDiscretization" -> {"TensorProductGrid", "MaxPoints" -> 2000}}]]
```

Here is my square wave:

```mathematica
Piecewise[{{1, Abs[x] <= 8.40749}, {0, Abs[x] > 8.40749}}]
```

Now running the whole thing together, we get the following:

```mathematica
interpolatingFunctLinear[
 Piecewise[{{1, Abs[x] <= 8.40749}, {0, Abs[x] > 8.40749}}], -200, 200, 300]

Manipulate[
 Show[Plot[
   Evaluate[{Y[x, t] /. solnDerivativeY} /. t -> \[Tau]], {x, -200, 200},
   PlotRange -> {{-200, 200}, {0, 1.1}}]], {\[Tau], 0, 300}]
```

Notice that the initial square wave is defined as having a value of 1 in the interval |x| <= 8.407 and zero everywhere else. I wanted to smooth out the spiky effects that seem to be present in the plot by setting "MaxPoints" to a very large number such as 100,000. I believe that these spiky effects are due to the initial square wave not having continuous derivatives at the edges of the pulse. However, when I choose "MaxPoints" -> 4000, Mathematica throws a "No Memory Available" error and shuts down. First I want to know how to let NDSolve smoothly handle these "jumpy" initial wave pulses (such as triangle and square wave pulses).
Then, I want to know how to progressively increase numerical precision so that I can smooth out these spiky effects within NDSolve.

• "FiniteElement" in v10 might help. (Not tested, the Wolfram cloud is currently down.) See for example this, this and this post. – xzczd Sep 27 '14 at 7:56
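A rough way to see why raising "MaxPoints" blows up memory. This is a back-of-envelope sketch under assumed storage behaviour, not NDSolve's documented internals: assume the returned InterpolatingFunction keeps the solution at every spatial grid point for every saved time step, with two components (Y and its time derivative) at 8 bytes each, and that the number of saved time steps grows roughly in proportion to the spatial resolution:

```python
# Back-of-envelope memory estimate for the stored solution data only.
# All quantities here are assumptions for illustration.
BYTES_PER_VALUE = 8

def interp_memory_gb(grid_points, time_steps, components=2):
    return grid_points * time_steps * components * BYTES_PER_VALUE / 1e9

for n in (2000, 4000, 100_000):
    # assume time steps scale with grid points for a rough picture
    print(n, interp_memory_gb(n, time_steps=n), "GB")
```

Under these assumptions the storage grows quadratically, so doubling MaxPoints roughly quadruples it, consistent with 2000 fitting in memory while 4000 does not once Mathematica's per-value overhead is added on top.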
http://mathhelpforum.com/math-topics/220101-arithmetic-progression.html
# Math Help - Arithmetic Progression

1. ## Arithmetic Progression

Question: Find the first term of an arithmetic progression whose 7th and 10th terms are 35 and 59 respectively.

2. ## Re: Arithmetic Progression

Originally Posted by brosnan123
Question: Find the first term of an arithmetic progression whose 7th and 10th terms are 35 and 59 respectively.

The terms of an AP look like $s_n=a+(n-1)d$ where $a$ is the first term and $d$ is the common difference. Now use the given information to solve. Because we are not a homework service, you must post some effort in order to receive more help.
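Working the hint through (this completes the exercise, so stop here if you'd rather solve it yourself): the 7th and 10th terms are exactly three common differences apart, so $d=(59-35)/3=8$ and $a=35-6d=-13$. In code:

```python
# s_n = a + (n-1)d, with s_7 = 35 and s_10 = 59.
d = (59 - 35) // 3   # s_10 - s_7 = 3d
a = 35 - 6 * d       # s_7 = a + 6d
print(a, d)          # -13 8
```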
http://www.zora.uzh.ch/48328/
Zurich Open Repository and Archive

Permanent URL to this publication: http://dx.doi.org/10.5167/uzh-48328

# Ewerz, C; von Manteuffel, A; Nachtmann, O (2011). On the energy dependence of the dipole-proton cross section in deep inelastic scattering. Journal of High Energy Physics, (3):62.

## Abstract

We study the dipole picture of high-energy virtual-photon-proton scattering. It is shown that different choices for the energy variable in the dipole cross section used in the literature are not related to each other by simple arguments equating the typical dipole size and the inverse photon virtuality, contrary to what is often stated. We argue that the good quality of fits to structure functions that use Bjorken-x as the energy variable — which is strictly speaking not justified in the dipole picture — can instead be understood as a consequence of the sign of scaling violations that occur for increasing Q^2 at fixed small x. We show that the dipole formula for massless quarks has the structure of a convolution. From this we obtain derivative relations between the structure function F_2 at large and small Q^2 and the dipole-proton cross section at small and large dipole size r, respectively.

## Citations

3 citations in Web of Science®
3 citations in Scopus®
http://math.stackexchange.com/questions/34687/two-elementary-games-in-number-theory
# two elementary games in number theory I solved these two problems from a programming challenge website: numgame and numgame2. These two problems are very similar. In the first one, the position is a number $n$ and each player can subtract from $n$ a divisor $d$ of $n$ with $1 \leq d < n$. Players Alice and Bob alternate with Alice going first and the first person unable to move loses. The second problem is similar except each player can subtract a prime number $p$, or 1, from the current position $n$, with $p < n$ ($p$ is not necessarily a divisor of $n$). We assume that the players play optimally and, as usual, ask who the winner is given the initial value of $n$. The claim is that in the first game Alice wins if $n \equiv 0 \pmod{2}$ and Bob wins if $n \equiv 1 \pmod{2}$, while in the second game Alice wins if $n \equiv 1 \pmod{4}$ and Bob wins otherwise. I'm looking for someone to give me a proof of these answers, or some hints to start. - If possible, please add into the question the rules of the games, and not just a link and a jumbled explanation. – Asaf Karagila Apr 23 '11 at 10:50 In particular the rules include that two players Alice and Bob alternate. While the first game (numgame) has Alice play first, the second game (numgame2) requires Bob to play first. In both games a player loses if no valid move is available at their turn, i.e. if the remaining number is 1. – hardmath Apr 23 '11 at 11:14 In the first problem the player who starts with a prime number loses, not necessarily 1. – Vicfred Apr 23 '11 at 11:23 Because the rules of the first game allow for a proper divisor, if the player Alice starts with n = 2 (a prime), she has a valid move in taking away 1, leaving Bob with a losing position. Apart from that, of course, primes are odd and thus illustrate the claimed classification of outcomes for the first game (n odd gives Bob a win). 
– hardmath Apr 24 '11 at 10:39

Hint for the first problem: If the current position $n$ is even and it's Alice's turn, can she always make a move into an odd position? If the current position $n>1$ is odd and it's Bob's turn, what can you prove about the parity of the position he must move into?

Second problem: Bob can never change a number of the form $4k+1$ into a number of the same form (primes and 1 are never divisible by 4). Alice can always change a number not of this form into a number of this form by subtracting 1, 2 or 3 as needed.

Hint for the first problem: Who wins if $n=2$, who wins if $n=1$? If $n$ is even, can Alice assure that after 2 turns (one turn by Alice and one by Bob) the number is still even?

1st Game
1) Let n = the value of the current position.
2) n = 1 × n, so n is always divisible by 1.
3) If n is even, Alice can always subtract 1, leaving Bob with an odd value of n.
4) An odd number has no even divisors, since if n/(2a) = b, then n = 2ab and is even.
5) If n is odd, Bob must always leave Alice with an even value of n, since all divisors are odd and he must subtract an odd number from n, while the difference between 2 odd numbers must always be even ((2x+1)-(2y+1) = 2(x-y)).
6) Thus, by 3) & 5), with each subsequent pair of turns, Alice can always leave Bob with a smaller odd position n.
7) This strategy leads to a descent that eventually must leave Bob with the odd position n = 1, and he loses. The descent could be hastened by Alice if she chose the largest odd divisor to subtract instead of 1 (which is the smallest) when it was her turn.
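As a sanity check (not part of the original thread), a brute-force memoized solver confirms both parity claims for small n. One modelling assumption: in the second game a move must leave a positive number, so n = 1 has no moves in either game:

```python
from functools import lru_cache

def primes_below(n):
    return [p for p in range(2, n)
            if all(p % q for q in range(2, int(p**0.5) + 1))]

@lru_cache(maxsize=None)
def wins_game1(n):
    # Move: subtract a proper divisor d with 1 <= d < n; no move at n = 1.
    return any(not wins_game1(n - d) for d in range(1, n) if n % d == 0)

@lru_cache(maxsize=None)
def wins_game2(n):
    # Move: subtract 1 or a prime p < n, leaving a positive number.
    return any(not wins_game2(n - m) for m in [1] + primes_below(n) if m < n)

for n in range(1, 80):
    assert wins_game1(n) == (n % 2 == 0)   # mover wins iff n is even
    assert wins_game2(n) == (n % 4 != 1)   # mover wins iff n is not 1 mod 4
print("claims verified for n < 80")
```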
https://www.techwhiff.com/issue/francis-is-learning-about-dissolving-at-school-she--118380
# Francis is learning about dissolving at school

###### Question: Francis is learning about dissolving at school. She reads descriptions about the rate of dissolving from four of her friends. George: The rate of dissolving can be fast or slow. The only thing you can do to speed up the rate of dissolving is to make the temperature hotter. Manuel: The rate of dissolving can change. The rate of dissolving gets faster when the particles are smaller, the temperature is hotter, or the mixture is stirred. Kim: The rate of dissolving speeds up when the particles are larger and the temperature is hotter. Stirring the mixture also helps increase the rate of dissolving. Chris: The rate of dissolving can speed up or slow down. Heating a mixture, stirring a mixture, or using smaller particles will slow down the rate of dissolving. Which of her friends has the best description about the rate of dissolving? A George B Manuel C Kim D Chris
https://www.gsnmagazine.com/clear-vanilla-ktn/page.php?120a28=an-asymmetric-key-cipher-uses-how-many-keys
These two attributes allow us to perform two separate operations with a key pair. Normally the answer is two: a public key and a private key. If the keys correspond, then the message is decrypted. It also requires a safe method to transfer the key from one party to another. Christina uses her Secret Key, 2, and Ajay's Public Key, 15, to get a shared secret of 30. The asymmetric key algorithm creates a secret private key and a published public key. For example, the public key that you use to transfer your bitcoins is created from the private key by applying a series of cryptographic hash functions. The keys, in practice, represent a shared secret between two or more parties that can be used to maintain a private information link. It ensures that malicious persons do not misuse the keys. Asymmetric ciphers are quite slow compared with symmetric ones, which is why asymmetric ciphers are often used only to securely distribute the key. But you could design a cryptosystem which supports having an arbitrary number of either. In symmetric encryption, the shared key is called the secret key. Asymmetric encryption (or public-key cryptography) uses a separate key for encryption and decryption. Symmetric-key ciphers are also known as secret-key ciphers, since the shared key must be known only to the participants. Symmetric versus Asymmetric. Public key cryptography is a kind of asymmetric cryptography.
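The Christina/Ajay example above multiplies a secret by a public value to get 30; real key agreement (Diffie-Hellman) does the analogous thing with modular exponentiation. A minimal sketch with toy numbers — p, g and both private keys below are made up and far too small for real use, which typically involves 2048-bit groups or elliptic curves:

```python
# Toy Diffie-Hellman: only the public values ever travel over the wire,
# yet both sides compute the same shared secret.
p, g = 23, 5                                      # public prime and generator

christina_private = 6
ajay_private = 15

christina_public = pow(g, christina_private, p)   # sent in the clear
ajay_public = pow(g, ajay_private, p)             # sent in the clear

# Each side combines its own private key with the other's public key:
christina_secret = pow(ajay_public, christina_private, p)
ajay_secret = pow(christina_public, ajay_private, p)

assert christina_secret == ajay_secret            # identical, never transmitted
print(christina_secret)
```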
During the handshake, the client and the web server will use: a key exchange algorithm, to determine how symmetric keys will be exchanged; an authentication or digital signature algorithm, which dictates how server authentication and client authentication (if required) will be implemented; and a bulk encryption cipher, which is used to encrypt the data. In an asymmetric scheme, a pair of keys is used, either alone or in addition to symmetric keys. A public/private key pair is generated whenever a new instance of an asymmetric algorithm is created. The SSH protocol uses an asymmetric key algorithm, that is, public key cryptography, to authenticate users and encrypt transmitted data. If the cipher illustrated in Figure 8.1 were a symmetric-key cipher, then the encryption and decryption keys would be identical. A deployed TLS connection typically combines, for example, a symmetric cipher with a 256-bit key and an asymmetric cipher (ECDHE_RSA) with a 2,048-bit key. Asymmetric cryptography is a type of encryption where the key used to encrypt the information is not the same as the key used to decrypt the information. However, decryption keys (private keys) are secret. This section of Data Communication and Networking - Cryptography MCQ (Multiple Choice) based questions and answers covers material compiled from the book Data Communication and Networking by the well-known author Behrouz Forouzan. These ciphers use asymmetric algorithms, which use one key to encrypt data and a different key to decrypt it. The public key is used to encrypt data, and the private key is used to decrypt data. 
It is common that, once asymmetric keys are set up and a secure channel created, symmetric keys are then used for encrypting all following messages, signed with the asymmetric key. Symmetric cryptography needs n(n-1)/2 keys for n communicating parties. An example of this trade-off can be found at Key Length, which uses multiple reports to suggest that a symmetric cipher with 128-bit keys, an asymmetric cipher with 3072-bit keys, and an elliptic curve cipher with 512-bit keys all have similar difficulty at present. The SSH server generates a pair of public/private keys for the connections. We’ve established how asymmetric encryption makes use of two mathematically linked keys: one referred to as the public key, and the other referred to as the private key. Public key encryption is by far the most common type of asymmetric cryptography. MCQ: An asymmetric-key (or public-key) cipher uses _____ (1 key / 2 keys / 3 keys / 4 keys). The answer is 2 keys: one public and one private. If the recipient wants to decrypt the message, the recipient will have to use his or her private key; this way only the intended receiver can decrypt the message. In symmetric encryption, there is only one key, and all parties involved use the same key to encrypt and decrypt information. Use of an asymmetric cipher also solves the scalability problem. The use of two keys in asymmetric encryption came into the scene to fix an inherent weakness of the symmetric cipher. Then, Alice and Bob can use a symmetric cipher and the session key to make the communication confidential. My guess: for symmetric they each need to maintain and transfer their own key, so probably $1000 \times 1000$, and for asymmetric maybe just $2000$, each having one public and one private. The keys are simply large numbers that are paired together; however, they are asymmetric, meaning they are not identical. This type of cipher uses a pair of different keys to encrypt and decrypt data. 
Secret keys are exchanged over the Internet or a large network. MCQ 94: A straight permutation cipher or a straight P-box ... To use asymmetric cryptography, Bob randomly generates a public/private key pair. Introduction to Asymmetric Encryption. Asymmetric key encryption is based on the public and private key encryption technique. Wrapping keys: keys that are used to encrypt other keys. Encrypting files before saving them to a storage device uses a symmetric key algorithm because the same key … Asymmetric encryption uses the public key of the recipient to encrypt the message. Asymmetric cryptography, which can also be called public key cryptography, uses private and public keys for encryption and decryption of the data. Anyone can use the encryption key (public key) to encrypt a message. For example, when I connect to the British Government portal gov.uk I get a TLS connection that uses AES_256_CBC (with a 256-bit key) set up using RSA with a 2,048-bit key. MCQ 95: The DES algorithm has a key length of _____ (16 bits / 64 bits / 128 bits / 32 bits). Asymmetric keys can be either stored for use in multiple sessions or generated for one session only. In symmetric encryption a single key is used to encrypt and decrypt while communicating via the cipher, while in asymmetric encryption two keys are used, one for encryption and one for decryption. MCQ 96: In asymmetric-key cryptography, the two keys, e and d, have a special relationship to each other. In this system, each user has two keys, a public key and a private key. To determine the number of keys in a key space, raise 2 to the power of the number of bits in the key space; in this example, 2^4 = 16. These are called hybrid encryption systems. ... How many encryption keys are required to fully implement an asymmetric algorithm with 10 participants? (A. 10, B. 20, C. 45, D. 100.) The answer is 20: one key pair per participant. 
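The counting questions scattered through this section all follow from two formulas: pairwise symmetric keying needs n(n-1)/2 secrets, asymmetric schemes need one key pair (two keys) per participant, and a k-bit key space holds 2^k keys. A quick sketch checking the figures quoted above:

```python
def symmetric_keys(n):
    # one shared secret per pair of participants
    return n * (n - 1) // 2

def asymmetric_keys(n):
    # one (public, private) pair per participant
    return 2 * n

# The MCQ above: 10 participants with a fully asymmetric scheme.
assert symmetric_keys(10) == 45
assert asymmetric_keys(10) == 20

# The 1000-person question above.
assert symmetric_keys(1000) == 499500
assert asymmetric_keys(1000) == 2000

# Key-space size: raise 2 to the number of key bits (a 4-bit space).
assert 2 ** 4 == 16
```

Note how fast the symmetric count grows: this quadratic blow-up is exactly the scalability problem that public-key cryptography solves.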
Encryption types can be easily divided into two categories: symmetric encryption, or single-key encryption, and asymmetric encryption, or public-key encryption. How many keys are used with asymmetric (public-key) cryptography? Which of the following is used in conjunction with a local security authority to generate the private and public key pair used in asymmetric cryptography? (The answer is a CSP, a cryptographic service provider.) This is done because symmetric encryption is generally faster than public key encryption. Mary wants to send a message to Sam so that only Sam can read it. For example, it is common to use public/private asymmetric keys for an initial exchange of symmetric private keys. (b) How many keys are required for two people to communicate via a cipher (for both symmetric and asymmetric)? Symmetric encryption uses a single private key to both encrypt and decrypt, for example an encrypted email. How many keys are required for secure communication among 1000 persons if a symmetric key encryption algorithm is used, and how many if an asymmetric key encryption algorithm is used? In these systems, an asymmetric algorithm is used to establish a connection. It is important to note that anyone with the secret key can decrypt the message, and this is why asymmetric encryption uses two related keys to boost security. Asymmetric cryptography is also known as public-key cryptography; asymmetric encryption is a relatively new area when compared to age-old symmetric encryption. There are many systems that make use of both symmetric and asymmetric keys. Asymmetric encryption is more complicated than symmetric encryption, not only because it uses public and private keys, but because asymmetric encryption can encrypt/decrypt only small messages, which must be mapped to the underlying math of the public-key cryptosystem. Some cryptosystems (like ECC) do not directly provide encryption primitives, so more complex schemes should be used. Then, a key is transferred between the two systems. 
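The hybrid pattern described above (an asymmetric algorithm establishes the connection, then a fast symmetric cipher carries the bulk data) can be sketched end to end. The XOR keystream below is a deliberately toy stand-in for a real symmetric cipher such as AES, and the key agreement reuses toy Diffie-Hellman numbers; none of this is secure, it only shows the structure:

```python
import hashlib

# Hybrid-encryption sketch: a (toy) key agreement supplies a shared
# secret, which keys a fast symmetric cipher for the bulk data.

def keystream(key: bytes, n: int) -> bytes:
    # Hash-counter keystream: a toy pseudo-random byte stream.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice restores the input.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# 1) Asymmetric step (toy Diffie-Hellman) establishes the secret.
p, g = 23, 5
shared = pow(pow(g, 15, p), 6, p)            # both sides compute this
session_key = hashlib.sha256(str(shared).encode()).digest()

# 2) Symmetric step encrypts the actual message quickly.
msg = b"meet at noon"
ct = xor_cipher(session_key, msg)
assert xor_cipher(session_key, ct) == msg    # same key decrypts
```

The expensive public-key math runs once per session; everything after that is cheap symmetric work, which is why real protocols such as TLS and SSH are built this way.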
He allows everyone access to the public key, including Alice. In an asymmetric-key cipher, the receiver uses the _____ key. (A) private (B) public (C) either (a) or (b) (D) neither (a) nor (b). Answer: A. Then, when Alice has some secret information that she would like to send to Bob, she encrypts the data using an appropriate asymmetric algorithm and the public key generated by Bob. We’ve also established that what one key encrypts, only the other can decrypt. Symmetric encryption algorithms use the same encryption key for both encryption and decryption (unlike asymmetric encryption algorithms, which use two different keys). Asymmetric cryptography, using a key pair for each user, needs n key pairs (2n keys) for n users. These systems often make use of a key exchange protocol like the Diffie-Hellman algorithm. What are the essential ingredients of a symmetric cipher? Asymmetric encryption uses two keys to encrypt a plaintext. You might also have seen other key lengths in use. In an asymmetric-key cipher, the sender uses the _____ key. Depending on the type of cryptographic system used, the public key is obtained from an encryption of the private key or vice versa. Symmetric-key algorithms are algorithms for cryptography that use the same cryptographic keys for both encryption of plaintext and decryption of ciphertext. The keys may be identical, or there may be a simple transformation to go between the two keys. In symmetric-key encryption the message is encrypted using a key, and the same key is used to decrypt the message, which makes it easy to use but less secure. While the public key can be made generally available, the private key should be closely guarded. Typically, those two keys are called the public and private keys, as is the case with RSA encryption. Encryption algorithms, in general, are based in mathematics and can range from very simple to …
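The claim that what one key encrypts, only the other can decrypt is easy to verify with textbook RSA numbers (deliberately tiny and insecure; shown only to make the key pair concrete):

```python
# Toy RSA key pair (textbook numbers, far too small to be secure):
# what one key encrypts, only the other key can decrypt.
p, q = 61, 53
n = p * q                      # 3233, the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent
d = pow(e, -1, phi)            # 2753, the private exponent

m = 65                         # a small "message"
c = pow(m, e, n)               # encrypt with the public key
assert pow(c, d, n) == m       # decrypt with the private key

# The pair works in the other direction too: encrypting with d
# can only be undone with e, which is the basis of RSA signatures.
s = pow(m, d, n)
assert pow(s, e, n) == m
```

The modular inverse `pow(e, -1, phi)` requires Python 3.8+; real RSA keys use moduli of 2,048 bits or more and padding schemes such as OAEP.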
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2048715204000473, "perplexity": 1293.1056922584949}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178367183.21/warc/CC-MAIN-20210303165500-20210303195500-00121.warc.gz"}
https://www.atlantis-press.com/proceedings/icsem-13/5634
Object Tracking Based on Corrected Background-Weighted Histogram Mean Shift and Kalman Filter
Authors: Yu Yang, Yongxing Jia, Chuanzhen Rong, Ying Zhu, Yuan Wang, Zhenjun Yue, Zhenxing Gao. Corresponding Author: Yu Yang. Available Online: April 2013. DOI: https://doi.org/10.2991/icsem.2013.140
Keywords: object tracking, mean shift, background information, Kalman filter
Abstract: The classical mean shift (MS) algorithm is the best color-based method for object tracking. However, in the real environment it presents some limitations, especially under the presence of noise and of objects with partial and full occlusions in complex environments. In order to deal with these problems, this paper proposes a reliable object tracking algorithm using a corrected background-weighted histogram (CBWH) and the Kalman filter (KF) based on the MS method. The experimental results show that the proposed method is superior to traditional MS tracking in the following aspects: 1) it provides consistent object tracking throughout the video; 2) it is not influenced by objects with partial and full occlusions; 3) it is less prone to background clutter.
Open Access Proceedings: 2nd International Conference On Systems Engineering and Modeling (ICSEM-13), Atlantis Press. Publication Date: April 2013. ISBN 978-94-91216-42-8.
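The Kalman-filter half of such a tracker can be illustrated generically. The scalar filter below uses a random-walk motion model and made-up measurement numbers; it is a sketch of the KF predict/update cycle, not the authors' implementation (which couples the filter to CBWH mean-shift measurements):

```python
# Minimal scalar Kalman filter: predict where the tracked object is,
# then correct with the noisy position a mean-shift step would supply.
# q is the process-noise variance, r the measurement-noise variance.

def kalman_1d(x, P, z, q=0.01, r=1.0):
    # Predict (random-walk motion model): state unchanged, variance grows.
    P = P + q
    # Update: the Kalman gain blends prediction and measurement.
    K = P / (P + r)
    x = x + K * (z - x)
    P = (1 - K) * P
    return x, P

x, P = 0.0, 1.0                       # initial estimate and variance
for z in [1.1, 2.0, 2.9, 4.2, 5.0]:  # made-up detector measurements
    x, P = kalman_1d(x, P, z)
```

With a random-walk model the estimate lags a steadily moving target; practical trackers, including the one in the paper, use richer motion models and a full state vector, but the predict/update structure is the same.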
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18331778049468994, "perplexity": 8132.401970111999}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525374.43/warc/CC-MAIN-20190717181736-20190717203736-00188.warc.gz"}
https://dorigo.wordpress.com/tag/black-holes/
## Black holes hype does not decay (February 3, 2009)

Posted by dorigo in astronomy, Blogroll, cosmology, humor, news, physics, politics, religion, science.

While the creation of black holes in the high-energy proton-proton collisions that LHC will hopefully start providing this fall is not granted, and while the scientific establishment is basically unanimous in claiming that those microscopic entities would anyway decay in a time so short that even top quarks look long-lived in comparison, the hype about doomsday being unwittingly delivered by the hands of psychotic, megalomaniac CERN scientists continues unhindered. Here are a few recent links on the matter (thanks to M.M. for pointing them out): The source of the renewed fire appears to be a paper published on the arxiv a couple of weeks ago. In it, the authors (R. Casadio, S. Fabi, and B. Harms) discuss a very specific model (a warped brane-world scenario), in whose context microscopic black holes might have a chance to survive for a few seconds. Never mind the fact that the authors say from the very abstract, as if feeling the impending danger of being instrumentalized, “we argue against the possibility of catastrophic black hole growth at the LHC“. This is not the way it should be done: you cannot assume a very specific model and then draw general conclusions, because others opposing your view may always use the same crooked logic and reverse the conclusions. However, I understand that the authors made a genuine effort to try and figure out what the phenomenology of microscopic black holes created in the scenario they considered could be. The accretion of a black hole may occur via direct collision with matter and via gravitational interactions with it. For microscopic black holes, however, the latter (called Bondi accretion) is basically negligible. 
The authors compute the evolution of the mass of the BH as a function of time for different values of a critical mass parameter $M_c$, which depends on the model and is connected to the characteristic thickness of the brane. They work out two explicit examples: in the first, with $M_c = 100$ kg, a 10 TeV black hole, created with 5 TeV/c momentum, is shown to decay with a roughly exponential law, but with a lifetime much longer -of the order of a picosecond- than that usually assumed for a micro-BH evaporating through Hawking radiation. In the second case, with $M_c = 10^6$ kg, the maximum BH mass is reached at $3.5 \times 10^{21}$ kg after about one second. Even in this scenario, the capture radius of the object is very small, and the object decays with a lifetime of about 100 seconds. The authors also show that “there is a rather narrow range of parameters […] for which RS black holes produced at the LHC would grow before evaporating“. In the figure on the right, the base-10 logarithm of the maximum distance traveled by the black hole (expressed in meters) is computed as a function of the base-10 logarithm of the critical mass (expressed in kilograms), for a black hole of 10 TeV mass produced by the LHC with a momentum of 5 TeV/c. As you can see, if the critical mass parameter is large enough, these things would be able to reach you in your bedroom. Scared? Let’s read their conclusions then. “[…] Indeed, in order for the black holes created at the LHC to grow at all, the critical mass should be $M_c > 10^5$ kg. This value is rather close to the maximum compatible with experimental tests of Newton’s law, that is $M_c = 10^6$ kg (which we further relaxed to $M_c = 10^8$ kg in our analysis). For smaller values of $M_c$, the black holes cannot accrete fast enough to overcome the decay rate. Furthermore, the larger $M_c$ is taken to be, the longer a black hole takes to reach its maximum value and the less time it remains near its maximum value before exiting the Earth. 
We conclude that, for the RS scenario and black holes described by the metric [6], the growth of black holes to catastrophic size does not seem possible. Nonetheless, it remains true that the expected decay times are much longer (and possibly >>1 sec) than is typically predicted by other models, as was first shown in [4]”. Here are some random reactions I collected from the physics arxiv blog -no mention of the authors’ names, since they do not deserve it: • This is starting to get me nervous. • Isn’t the LHC in Europe? As long as it doesn’t suck up the USA, I’m fine with it. • It is entirely possible that the obvious steps in scientific discovery may cause intelligent societies to destroy themselves. It would provide a clear resolution to the Fermi paradox. • I’m pro science and research, but I’m also pro caution when necessary. • That’s what I asked and CERN never replied. My question was: “Is it possible that some of these black might coalesce and form larger black holes? larger black holes would be more powerful than their predecessors and possibly aquire more mass and grow still larger.” • The questions is, whether these scientists are competent at all, if they haven’t made such analysis a WELL BEFORE the LHC project ever started. • I think this is bad. American officials should do something about this because if scientists do end up destroying the earth with a black hole it won’t matter that they were in Europe, America will get the blame. On the other hand, if we act now to be seen dealing as a responsible member of the international community, then, if the worst happens, we have a good chance of pinning it on the Jews. • The more disturbing fact about all this is the billions and billions being spent to satisfy the curiosity of a select group of scientists and philosophers. Whatever the results will yield little real-world benefit outside some incestuous lecture circuit. 
• “If events at the LHC swallow Switzerland, what are we going to do without wrist watches and chocolate?” Don’t worry, we’ll still have Russian watches. they’re much better, faster even.

It goes on, and on, and on. Boy, it is highly entertaining, but unfortunately, I fear this is taking a bad turn for Science. I tend to believe that on this particular issue, no discussion would be better than any discussion -it is like trying to argue with a fanatic about the reality of a statue of the Virgin weeping blood. … So, why don’t we just shut up on this particular matter? Hmm, if I post this, I would be going against my own suggestion. Damned either way.

## Black holes, the winged seeds of our Universe (January 8, 2009)

Posted by dorigo in astronomy, cosmology, news, science.

From Percy Bysshe Shelley’s “Ode to the West Wind” (1819), one of my favourite poems:

[…]O thou,
Who chariotest to their dark wintry bed
The winged seeds, where they lie cold and low,
Each like a corpse within its grave, until
Thine azure sister of the Spring shall blow
Her clarion o’er the dreaming earth, and fill
(Driving sweet buds like flocks to feed in air)
With living hues and odors plain and hill:
Wild Spirit, which art moving everywhere;
Destroyer and preserver; hear, oh, hear!

The winged seeds -of galaxies, and ultimately of everything that there is to see in our Universe- appear today to be black holes: this is what emerges from the studies of Chris Carilli, of the National Radio Astronomy Observatory (NRAO). In a press release of January 6th, Carilli explains that the evidence that black holes are antecedent to galaxy formation is piling up. In a nutshell, there appears to be a constant ratio between the mass of objects like galaxies and giant globular clusters and the black hole they contain at their center. 
This has been known for a while -I learned it at an intriguing talk by Al Stebbins at the “Outstanding Questions in Cosmology” conference, in March 2007 at the Imperial College of London. But what has been discovered more recently is that the very oldest objects contain more massive black holes than expected, a sign that black holes started growing earlier than their surroundings. This is incredibly interesting, and I confess I had always suspected it, when looking at the beautiful spiral galaxies, attracted in a giant vortex by their massive center. I think this realization is a true gate to a deeper understanding of our Universe and its formation. A thought today goes to Louise, who has always held that black holes have a special role in the formation of our Universe.

## Interviewed for Nature (the magazine…) (September 10, 2008)

Posted by dorigo in internet, news, personal, physics, science.

Yesterday I had lunch at the Meyrinoise (courtesy Nature) with Geoff Brumfiel, a reporter from Nature (the magazine, not the bitch) who came to CERN to witness the big media event of today. We talked about several things, but in the end what was left to discuss for the podcast we recorded was the least interesting thing of all -the fact that we are not going to disappear in a black hole after the LHC eventually starts colliding-beam operations (which, for the absent-minded among you, hasn’t happened yet -only one beam at a time has been circulated in the machine so far). In any case, you can hear some more interesting interviews along with mine at the Nature site, specifically here. UPDATE: fixed the link to the main page of Nature news. There, you find my pic (not a good one actually) linking to the interviews. UPDATE: Hmm, if Nature (the magazine AND a little bit of a bitch today) keeps changing the address of pages I link, I am going to download the darn site here and stop worrying. Anyway, here is the updated link to the LHC special. 
## Strasbourg clears the last hurdle to LHC (September 1, 2008)

Posted by dorigo in humor, news, physics, science.

The European Court of Human Rights in Strasbourg has today rejected the appeal by a group of doomsday-scenario aficionados led by Markus Goritschnig, who claimed the experiment violated Article 2 and Article 8 of the European Convention on Human Rights, which grant the right to life and the right to respect for private and family life, respectively. I am so fed up with such claims, soooo fed up, that a small but non negligible part of me is actually rooting for black holes being actually produced by LHC, for Hawking radiation being a gross mistake, and for the very first black hole created at LHC startup to swallow our whole solar system. It would be a reasonable punishment for the opponents of the LHC if, after losing all their battles, they were finally torn to smithereens by a black hole. Of course, the fact that the rest of us would also have to die the same horrible death is a small price to pay for being direct observers of such a sublime punishment – death by the very device they claimed to fear, because their claim was groundless, despite being true.

## More math illiteracy (September 1, 2008)

Posted by dorigo in internet, news, physics, science.

In my previous post I complained about the utter inability of many Italian reporters to pay attention to numerical figures in their pieces, as if the hard data they sometimes have to unwillingly report were a nuisance. Today, upon reading the news on the site of the other main Italian newspaper, Repubblica, I saw another example of that effect. And it is an even more annoying one, for several reasons. First, because Repubblica is the newspaper I prefer of the two. Second, because it appears in a science-related piece, written by a reporter who is supposed to pay attention to the data he produces. Third, because it concerns the LHC. 
In a piece titled “Fermate il test sul Big Bang o la Terra sparirà” (Stop the Big Bang test or Earth will vanish), Enrico Franceschini wrote a rather sloppy account of the issue of micro-black-hole production by the LHC. For instance, sloppiness is apparent when he writes: “…ci sono scarse possibilità che l’acceleratore formi un buco nero capace di porre una minaccia concreta al pianeta…” (there are slim chances that the accelerator creates a black hole capable of posing a concrete threat to the planet). Slim chances… Oh well. I am rather more pissed by the following statement: “Vero è che il nuovo acceleratore ha suscitato attenzioni e polemiche perché è il più grande mai costruito, con una circonferenza di 26 chilometri e la possibilità di lanciare particelle atomiche 11.245 volte al secondo prima di farle scontrare una contro l’altra a una temperatura 100mila volte più alta di quella che esiste al centro del sole.” (It is true that the new accelerator has attracted attention and controversy because it is the largest ever built, with a circumference of 26 kilometers and the possibility to launch atomic particles 11,245 times a second before having them collide one against the other at a temperature 100 thousand times higher than the one existing at the center of the sun.) Atomic particles? Was “protons and heavy ions” too technical for the piece? And where the hell is that 11,245 Hz figure coming from? “Launching atomic particles 11,245 times a second” does not even make any sense. The right figure, however, is 40 million times a second. This time the mistake is by 3.55 orders of magnitude. Darn, the explanation suggested in the thread of the previous post does not even apply here.

## What the micro black hole fear mongering really is about (June 12, 2008)

Posted by dorigo in personal, physics, science.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 9, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.54660964012146, "perplexity": 1929.9305840379084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927824.26/warc/CC-MAIN-20150521113207-00219-ip-10-180-206-219.ec2.internal.warc.gz"}
https://searxiv.org/search?author=Kyle%20Kremer
### Results for "Kyle Kremer"

Accreting Double white dwarf binaries: Implications for LISA (Jul 04 2017; Sep 12 2017). We explore the long-term evolution of mass-transferring white dwarf binaries undergoing both direct-impact and disk accretion and explore implications of such systems to gravitational wave astronomy. We cover a broad range of initial component masses ... More

In Search of the Thermal Eccentricity Distribution (Jan 31 2019). About a century ago, Jeans (1919) discovered that if binary stars reach a state approximating energy equipartition, for example through many dynamical encounters that exchange energy, their eccentricity distribution can be described by dN/de = 2e. This ... More

Millisecond Pulsars and Black Holes in Globular Clusters (Feb 15 2019; Apr 18 2019). Over a hundred millisecond radio pulsars (MSPs) have been observed in globular clusters (GCs), motivating theoretical studies of the formation and evolution of these sources through stellar evolution coupled to stellar dynamics. Here we study MSPs in ... More

Long-term evolution of double white dwarf binaries accreting through direct impact (Feb 21 2015; Apr 22 2015). We calculate the long-term evolution of angular momentum in double white dwarf binaries undergoing direct impact accretion over a broad range of parameter space. We allow the rotation rate of both components to vary, and account for the exchange of angular ... More

Spin Tilts in the Double Pulsar Reveal Supernova Spin Angular-Momentum Production (Apr 26 2011; Oct 30 2011). The system PSR J0737-3039 is the only binary pulsar known to consist of two radio pulsars (PSR J0737-3039 A and PSR J0737-3039 B). This unique configuration allows measurements of spin orientation for both pulsars: pulsar A's spin is tilted from the orbital ... More

Accreting black hole binaries in globular clusters (Sep 16 2017; Oct 31 2017). We explore the formation of mass-transferring binary systems containing black holes within globular clusters. 
We show that it is possible to form mass-transferring black hole binaries with main sequence, giant, and white dwarf companions with a variety ... More

Low-mass X-ray binaries ejected from globular clusters (Feb 13 2018). We explore the population of mass-transferring binaries ejected from globular clusters (GCs) with both black hole (BH) and neutron star (NS) accretors. We use a set of 137 fully evolved globular cluster models which span a large range in cluster properties ... More

Diffusion of relativistic gas mixtures in gravitational fields (Mar 26 2013). A mixture of relativistic gases of non-disparate rest masses in a Schwarzschild metric is studied on the basis of a relativistic Boltzmann equation in the presence of gravitational fields. A BGK-type model equation of the collision operator of the Boltzmann ... More

Relativistic Ohm and Fourier laws for binary mixtures of electrons with protons and photons (Jul 17 2012). Binary mixtures of electrons with protons and of electrons with photons subjected to external electromagnetic fields are analyzed by using the Anderson and Witting model equation. The relativistic laws of Ohm and Fourier are determined as well as general ... More

Computer Simulations of charged systems (Mar 26 2002). In this brief contribution to the Proceedings of the NATO-ASI on "Electrostatic Effects in Soft Matter and Biophysics", which took place in Les Houches from Oct. 1-13, 2000, we summarize in short aspects of the simulation methods used to study charged systems. ... More

Analysis of Jeans instability from the Boltzmann equation (Aug 28 2015; Oct 25 2016). The dynamics of self-gravitating fluids is analyzed within the framework of a collisionless Boltzmann equation in the presence of gravitational fields and Poisson equation. Two cases are analyzed: a system with baryonic and dark matter in a static universe ... 
More

Entropy, entropy flux and entropy supply rate of granular fluids (Jan 14 2010; Apr 05 2010). The aim of this work is to analyze the entropy, entropy flux and entropy supply rate of granular fluids within the frameworks of the Boltzmann equation and continuum thermodynamics. It is shown that the entropy inequality for a granular gas that follows ... More

Thermal Conductivity, Shear and Bulk Viscosities for a Relativistic Binary Mixture (Dec 13 2016). In the present work, we deal with a binary mixture of diluted relativistic gases within the framework of the kinetic theory. The analysis is made within the framework of the Boltzmann equation. We assume that the gas is under the influence of an isotropic ... More

Transport coefficients for relativistic gas mixtures of hard-sphere particles (Dec 13 2016). In the present work, we calculate the transport coefficients for a relativistic binary mixture of diluted gases of hard-sphere particles. The gas mixture under consideration is studied within the relativistic Boltzmann equation in the presence of a gravitational ... More

A Contribution to the Theory Behind the M0 Capture-Recapture Model: An Improved Estimator (Dec 10 2012; Feb 17 2014). We explore the use of a sufficient statistic based on the identified members that are obtained for samples that are selected under the $M_0$ capture-recapture closed population model (Schwarz and Seber, 1999). A Rao-Blackwellized version of the estimator ... More

Lattice-induced non-adiabatic frequency shifts in optical lattice clocks (Aug 30 2010). We consider the frequency shift in optical lattice clocks which arises from the coupling of the electronic motion to the atomic motion within the lattice. For the simplest of 3-D lattice geometries this coupling is shown to only affect clocks based on ... 
More Explicit Salem sets and applications to metrical Diophantine approximationApr 01 2016Let $Q$ be an infinite subset of $\mathbb{Z}$, let $\Psi: \mathbb{Z} \rightarrow [0,\infty)$ be positive on $Q$, and let $\theta \in \mathbb{R}$. Define $$E(Q,\Psi,\theta) = \{ x \in \mathbb{R} : \| q x - \theta \| \leq \Psi(q) \text{ for infinitely ... More Statistical Challenges for Searches for New Physics at the LHCNov 03 2005Jan 04 2006Because the emphasis of the LHC is on 5 sigma discoveries and the LHC environment induces high systematic errors, many of the common statistical procedures used in High Energy Physics are not adequate. I review the basic ingredients of LHC searches, the ... More A Generalization of Plexes of Latin SquaresAug 01 2010A k-plex of a latin square is a collection of cells representing each row, column, and symbol precisely k times. The classic case of k=1 is more commonly known as a transversal. We introduce the concept of a k-weight, an integral weight function ... More Real-Time Stochastic Predictive Control for Hybrid Vehicle Energy ManagementApr 23 2018This work presents three computational methods for real time energy management in a hybrid hydraulic vehicle (HHV) when driver behavior and vehicle route are not known in advance. These methods, implemented in a receding horizon control (aka model predictive ... More Conformal dimension and boundaries of planar domainsJul 16 2015Jun 14 2016Building off of techniques that were recently developed by M. Carrasco, S. Keith, and B. Kleiner to study the conformal dimension of boundaries of hyperbolic groups, we prove that uniformly perfect boundaries of John domains in the Riemann sphere have ... More Discreteness and Large Scale SurjectionsAug 12 2015We study the concept of coarse disjointness and large scale n-to-1 functions. As a byproduct, we obtain an Ostrand-type characterization of asymptotic dimension for coarse structures. 
It is shown that properties like finite asymptotic dimension, coarse ... More Upward and Downward Runs on Partially Ordered SetsOct 13 2008Apr 06 2010We consider Markov chains on partially ordered sets that generalize the success-runs and remaining life chains in reliability theory. We find conditions for recurrence and transience and give simple expressions for the invariant distributions. We study ... More A sufficient condition for finiteness of Frobenius test exponentsSep 26 2018Apr 18 2019The Frobenius test exponent \operatorname{Fte}(R) of a local ring (R,\mathfrak{m}) of prime characteristic p > 0 is the smallest e_0 \in \mathbb{N} such that for every ideal \mathfrak{q} generated by a (full) system of parameters, the Frobenius ... More Moment Independent Expansion for Fourth-Order Corrections in Lattice Boltzmann MethodsOct 30 2017Jan 17 2018A expansion to fourth-order for lattice Boltzmann methods is presented. This expansion provides an easy model for finding fourth-order corrections to lattice Boltzmann methods for various physical systems. The fourth-order terms can give rise to improved ... More Complex Random Matrices have no Real EigenvaluesSep 24 2016Oct 08 2017Let \zeta = \xi + i\xi' where \xi, \xi' are iid copies of a mean zero, variance one, subgaussian random variable. Let N_n be a n \times n random matrix with entries that are iid copies of \zeta. We prove that there exists a c \in (0,1) such ... More Loewner chains and Hölder geometryOct 21 2014Feb 22 2016The Loewner equation provides a correspondence between continuous real-valued functions \lambda_t and certain increasing families of half-plane hulls K_t. In this paper we study the deterministic relationship between specific analytic properties of ... 
More Lower bounds for codimension-1 measure in metric manifoldsFeb 20 2016We establish Euclidean-type lower bounds for the codimension-1 Hausdorff measure of sets that separate points in doubling and linearly locally contractible metric manifolds. This gives a quantitative topological isoperimetric inequality in the setting ... More Quark Matter Induced Extensive Air ShowersNov 15 2010May 03 2011If the dark matter of our galaxy is composed of nuggets of quarks or antiquarks in a colour superconducting phase there will be a small but non-zero flux of these objects through the Earth's atmosphere. A nugget of quark matter will deposit only a small ... More Complex Random Matrices have no Real EigenvaluesSep 24 2016Let \zeta = \xi + i\xi' where \xi, \xi' are iid copies of a mean zero, variance one, subgaussian random variable. Let N_n be a n \times n random matrix with iid entries \zeta_{ij} = \zeta. We prove that there exists a c \in (0,1) such that ... More Shortest Non-trivial Cycles in Directed and Undirected Surface GraphsNov 29 2011Sep 19 2012Let G be a graph embedded on a surface of genus g with b boundary cycles. We describe algorithms to compute multiple types of non-trivial cycles in G, using different techniques depending on whether or not G is an undirected graph. If G is undirected, ... More Upper Bounds for Maximally Greedy Binary Search TreesFeb 24 2011Apr 29 2011At SODA 2009, Demaine et al. presented a novel connection between binary search trees (BSTs) and subsets of points on the plane. This connection was independently discovered by Derryberry et al. As part of their results, Demaine et al. considered GreedyFuture, ... More 2^3 Quantified Boolean Formula Games and Their ComplexitiesJan 15 2014Dec 29 2014Consider QBF, the Quantified Boolean Formula problem, as a combinatorial game ruleset. The problem is rephrased as determining the winner of the game where two opposing players take turns assigning values to boolean variables. 
In this paper, three common ... More An ergodic algorithm for generating knots with a prescribed injectivity radiusMar 09 2016May 08 2017The first algorithm for sampling the space of thick equilateral knots, as a function of thickness, will be described. This algorithm is based on previous algorithms of applying random reflections. To prove the existence of the algorithm, we describe a ... More A Contribution to the Theory Behind the M0 Capture-Recapture Model: An Improved EstimatorDec 10 2012Nov 28 2018We explore the use of a sufficient statistic based on the identified members that are obtained for samples that are selected under the M_0 capture-recapture closed population model (Schwarz and Seber, 1999). A Rao-Blackwellized version of the estimator ... More Rao-Blackwellization to give Improved Estimates in Multi-List StudiesSep 26 2017Sufficient statistics are derived for the population size and parameters of commonly used closed population mark-recapture models. Rao-Blackwellization details for improving estimators that are not functions of the statistics are presented. As Rao-Blackwellization ... More Recent Advances on Estimating Population Size with Link-Tracing SamplingSep 22 2017A new approach to estimate population size based on a stratified link-tracing sampling design is presented. The method extends on the Frank and Snijders (1994) approach by allowing for heterogeneity in the initial sample selection procedure. Rao-Blackwell ... More Gallai MultigraphsJun 30 2007A complete edge-colored graph or multigraph is called Gallai if it lacks rainbow triangles. We give a construction of all finite Gallai multigraphs. Primes from sums of two squares and missing digitsJun 07 2018Let \mathcal{A}' be the set of integers missing any three fixed digits from their decimal expansion. We produce primes in a thin sequence by proving an asymptotic formula for counting primes of the form p = m^2 + \ell^2, with \ell \in \mathcal{A}'. ... 
More Rigidity for Quasi-Möbius Actions on Fractal Metric SpacesAug 02 2013Jan 15 2016In \cite{BK02}, M. Bonk and B. Kleiner proved a rigidity theorem for expanding quasi-M\"obius group actions on Ahlfors n-regular metric spaces with topological dimension n. This led naturally to a rigidity result for quasi-convex geometric actions ... More Discrete length-volume inequalities and lower volume bounds in metric spacesOct 21 2014Feb 22 2016A theorem of W. Derrick ensures that the volume of any Riemannian cube ([0,1]^n,g) is bounded below by the product of the distances between opposite codimension-1 faces. In this paper, we establish a discrete analog of Derrick's inequality for weighted ... More Legendrian ribbons and strongly quasipositive links in an open bookOct 17 2017We show that a link in an open book can be realized as a strongly quasipositive braid if and only if it bounds a Legendrian ribbon with respect to the associated contact structure. This generalizes a result due to Baader and Ishikawa for links in the ... More Characterizing accreting double white dwarf binaries with the Laser Interferometer Space Antenna and GaiaOct 23 2017Apr 30 2018We demonstrate a method to fully characterize mass-transferring double white dwarf (DWD) systems with a helium-rich (He) WD donor based on the mass--radius relationship for He WDs. Using a simulated Galactic population of DWDs, we show that donor and ... More Tidal Disruptions of Stars by Black Hole Remnants in Dense Star ClustersApr 12 2019In a dense stellar environment, such as the core of a globular cluster (GC), dynamical interactions with black holes (BHs) are expected to lead to a variety of astrophysical transients. Here we explore tidal disruption events (TDEs) of stars by stellar-mass ... 
More How initial size governs core collapse in globular clustersAug 07 2018Dec 03 2018Globular clusters (GCs) in the Milky Way exhibit a well-observed bimodal distribution in core radii separating the so-called "core-collapsed" and "non-core-collapsed" clusters. Here, we use our H\'enon-type Monte Carlo code, CMC, to explore initial cluster ... More LISA sources in Milky Way globular clustersFeb 15 2018May 30 2018We explore the formation of double-compact-object binaries in Milky Way (MW) globular clusters (GCs) that may be detectable by the Laser Interferometer Space Antenna (LISA). We use a set of 137 fully evolved GC models that, overall, effectively match ... More How black holes shape globular clusters: Modeling NGC 3201Feb 26 2018Numerical simulations have shown that black holes (BHs) can strongly influence the evolution and present-day observational properties of globular clusters (GCs). Using a Monte Carlo code, we construct GC models that match the Milky Way (MW) cluster NGC ... More Stable Adiabatic Times For A Continuous Evolution Of Markov ChainsJul 22 2015This paper continues the discussion on the stability of time-inhomogeneous Markov chains. In particular, this paper defines a time-inhomogeneous, discrete-time Markov chain governed by a continuous evolution in the appropriate martrix space. This matrix ... More Digital Arroyos: An Examination of State Policy and Regulated Market Boundaries in Constructing Rural Internet AccessSep 25 2001This focused study on state-level policy and access patterns contributes to a fuller understanding of how these invisible barriers work to structure access and define rural communities. Combining both quantitative and qualitative data, this study examines ... More "Cosmic Rays" from Quark MatterJun 04 2010Nov 30 2010I describe a dark matter candidate based in qcd physics in which the dark matter is composed of macroscopically large "nuggets" of quark and anti-quark matter. 
These objects may have a sufficiently massive low number density to avoid constraints from ... More A sufficient condition for finiteness of Frobenius test exponentsSep 26 2018Nov 05 2018The Frobenius test exponent \operatorname{Fte}(R) of a local ring (R,\mathfrak{m}) of prime characteristic p > 0 is the smallest e_0 \in \mathbb{N} such that for every ideal \mathfrak{q} generated by a (full) system of parameters, the Frobenius ... More State Classification of Cooking Objects Using a VGG CNNApr 21 2019In machine learning, it is very important for a robot to know the state of an object and recognize particular desired states. This is an image classification problem that can be solved using a convolutional neural network. In this paper, we will discuss ... More Quasipositive links and Stein surfacesMar 29 2017Apr 28 2017We study the generalization of quasipositive links from the three-sphere to arbitrary closed, orientable three-manifolds. As in the classical case, we see that this generalization of quasipositivity is intimately connected to contact and complex geometry. ... More Millisecond Pulsars and Black Holes in Globular ClustersFeb 15 2019Over a hundred millisecond radio pulsars (MSPs) have been observed in globular clusters (GCs), motivating theoretical studies of the formation and evolution of these sources through stellar evolution coupled to stellar dynamics. Here we study MSPs in ... More Jif: Language-based Information-flow Security in JavaDec 30 2014In this report, we examine Jif, a Java extension which augments the language with features related to security. Jif adds support for security labels to Java's type system such that the developer can specify confidentiality and integrity policies to the ... More An ergodic algorithm for generating knots with a prescribed injectivity radiusMar 09 2016The first algorithm for sampling the space of thick equilateral knots, as a function of thickness, will be described. 
This algorithm is based on previous algorithms of applying random reflections. To prove the existence of the algorithm, we describe a ... More Atmospheric Radio Signals From Galactic Dark MatterJul 31 2012Sep 17 2013If the dark matter of our galaxy is composed of nuggets of quarks or antiquarks in a colour superconducting phase there will be a small but non-zero flux of these objects through the Earth's atmosphere. A nugget of quark matter will deposit only a small ... More Explicit Salem Sets in \mathbb{R}^2May 26 2016Sep 30 2016We construct explicit (i.e., non-random) examples of Salem sets in \mathbb{R}^2 of dimension s for every 0 \leq s \leq 2. In particular, we give the first explicit examples of Salem sets in \mathbb{R}^2 of dimension 0 < s < 1. This extends a ... More Loop-Erased Random SurfacesNov 16 2015Jul 13 2016Loop-erased random walk and it's scaling limit, Schramm--Loewner evolution, have found numerous applications in mathematics and physics. We present a 2 dimensional analogue of LERW, the loop erased random surface. We do this by defining a 2 dimensional ... More On the Sobolev stability threshold of 3D Couette flow in a homogeneous magnetic fieldDec 30 2018We study the stability of the Couette flow (y,0,0)^T in the 3D incompressible magnetohydrodynamic (MHD) equations for a conducting fluid on \mathbb{T} \times \mathbb{R} \times \mathbb{T} in the presence of a homogeneous magnetic field \alpha(\sigma, ... More A Contribution to the Theory Behind the Capture-Recapture M0 Model: An Improved EstimatorNov 21 2012Nov 28 2018We explore the use of a sufficient statistic based on the data of samples that are selected under the M_0 capture-recapture closed population model (Schwarz and Seber, 1999). A Rao-Blackwellized version of the estimator based on a sufficient statistic ... More Lie Algebroid Gauging of Non-linear Sigma ModelsMay 02 2019This paper examines a proposal for gauging non-linear sigma models with respect to a Lie algebroid action. 
The general conditions for gauging a non-linear sigma model with a set of involutive vector fields are given. We show that it is always possible ... More Equivariant Solutions to a System of Nonlinear Wave Equations with Ginzburg-Landau Type PotentialJan 11 2016It is known that there exist solutions with interfaces to various scalar nonlinear wave equations. In this paper, we look for solutions of a two-component system of nonlinear wave equations where one of the components has an interface and and where the ... More A Language for Function Signature RepresentationsMar 31 2018Apr 18 2018Recent work by (Richardson and Kuhn, 2017a,b; Richardson et al., 2018) looks at semantic parser induction and question answering in the domain of source code libraries and APIs. In this brief note, we formalize the representations being learned in these ... More Constant Rate Distributions on Partially Ordered SetsOct 13 2008Apr 06 2010We consider probability distributions with constant rate on partially ordered sets, generalizing distributions in the usual reliability setting that have constant failure rate. In spite of the minimal algebraic structure, there is a surprisingly rich ... More Lower bounds for codimension-1 measure in metric manifoldsFeb 20 2016Oct 21 2016We establish Euclidean-type lower bounds for the codimension-1 Hausdorff measure of sets that separate points in doubling and linearly locally contractible metric manifolds. This gives a quantitative topological isoperimetric inequality in the setting ... More Cross-sections of unknotted ribbon disks and algebraic curvesJan 16 2018Feb 28 2018We resolve parts (A) and (B) of Problem 1.100 from Kirby's list by showing that many nontrivial links arise as cross-sections of unknotted holomorphic disks in the four-ball. The techniques can be used to produce unknotted ribbon surfaces with prescribed ... 
More Minimal braid representatives of quasipositive linksMay 05 2016We show that every quasipositive link has a quasipositive minimal braid representative, partially resolving a question posed by Orevkov. These quasipositive minimal braids are used to show that the maximal self-linking number of a quasipositive link is ... More A multi-resolution model to capture both global fluctuations of an enzyme and molecular recognition in the ligand-binding siteNov 02 2016In multi-resolution simulations, different system components are simultaneously modelled at different levels of resolution, these being smoothly coupled together. In the case of enzyme systems, computationally expensive atomistic detail is needed in the ... More Chaplygin Gas of Tachyon Nature Imposed by Symmetry and Constrained via H(z) DataMay 03 2015Sep 10 2015An action of general form is proposed for a Universe containing matter, radiation and dark energy. The latter is interpreted as a tachyon field non-minimally coupled to the scalar curvature. The Palatini approach is used when varying the action so the ... More Fokker-Planck type equations for a simple gas and for a semi-relativistic Brownian motion from a relativistic kinetic theoryJul 13 2007A covariant Fokker-Planck type equation for a simple gas and an equation for the Brownian motion are derived from a relativistic kinetic theory based on the Boltzmann equation. For the simple gas the dynamic friction four-vector and the diffusion tensor ... More Bulk viscous cosmological model with interacting dark fluidsSep 23 2011The objective of the present work is to study a cosmological model for a spatially flat Universe whose constituents are a dark energy field and a matter field which includes baryons and dark matter. The constituents are supposed to be in interaction and ... 
More Statistical mechanics of double-stranded semi-flexible polymersMay 09 1997Jan 07 1998We study the statistical mechanics of double-stranded semi-flexible polymers using both analytical techniques and simulation. We find a transition at some finite temperature, from a type of short range order to a fundamentally different sort of short ... More Grad's moment method for relativistic gas mixtures of Maxwellian particlesJan 14 2013Mixtures of relativistic gases are analyzed within the framework of Boltzmann equation by using Grad's moment method. A relativistic mixture of r constituent is characterized by the moments of the distribution function: particle four-flows, energy-momentum ... More Dynamic Tags for Security ProtocolsMay 12 2014Jun 17 2014The design and verification of cryptographic protocols is a notoriously difficult task, even in symbolic models which take an abstract view of cryptography. This is mainly due to the fact that protocols may interact with an arbitrary attacker which yields ... More Coupling atomistic and continuum hydrodynamics through a mesoscopic model: application to liquid waterAug 04 2009We have conducted a triple-scale simulation of liquid water by concurrently coupling atomistic, mesoscopic, and continuum models of the liquid. The presented triple-scale hydrodynamic solver for molecular liquids enables the insertion of large molecules ... More Consistent interpretation of molecular simulation kinetics using Markov state models biased with external informationFeb 11 2016Molecular simulations can provide microscopic insight into the physical and chemical driving forces of complex molecular processes. Despite continued advancement of simulation methodology, model errors may lead to inconsistencies between simulated and ... More Charge inversion in colloidal systemsJan 08 2002We investigate spherical macroions in the strong Coulomb coupling regime within the primitive model in salt-free environment. 
Molecular dynamics (MD) simulations are used to elucidate the effect of discrete macroion charge distribution on charge inversion. ... More Conformation of a Polyelectrolyte Complexed to a Like-Charged ColloidNov 19 2001We report results from a molecular dynamics (MD) simulation on the conformations of a long flexible polyelectrolyte complexed to a charged sphere, \textit{both negatively charged}, in the presence of neutralizing counterions in the strong Coulomb coupling ... More Relative Resolution: A Multipole Approximation at Appropriate DistancesSep 17 2018Recently, we introduced Relative Resolution as a hybrid formalism for fluid mixtures [1]. The essence of this approach is that it switches molecular resolution in terms or relative separation: While nearest neighbors are characterized by a detailed fine-grained ... More On the Inductive Bias of Word-Character-Level Multi-Task Learning for Speech RecognitionNov 28 2018End-to-end automatic speech recognition (ASR) commonly transcribes audio signals into sequences of characters while its performance is evaluated by measuring the word-error rate (WER). This suggests that predicting sequences of words directly may be helpful ... More Moments and ergodicity of the jump-diffusion CIR processSep 04 2017Jan 19 2018We study the jump-diffusion CIR process, which is an extension of the Cox-Ingersoll-Ross model and whose jumps are introduced by a subordinator. We provide sufficient conditions on the L\'evy measure of the subordinator under which the jump-diffusion ... More Ultrashort spatiotemporal optical solitons in quadratic nonlinear media: Generation of line and lump solitons from few-cycle input pulsesApr 11 2011By using a powerful reductive perturbation technique, or a multiscale analysis, a generic Kadomtsev-Petviashvili evolution equation governing the propagation of femtosecond spatiotemporal optical solitons in quadratic nonlinear media beyond the slowly ... 
More Adaptive molecular resolution via a continuous change of the phase space dimensionalitySep 01 2006For the study of complex synthetic and biological molecular systems by computer simulations one is still restricted to simple model systems or to by far too small time scales. To overcome this problem multiscale techniques are being developed for many ... More Irreversible Processes in Inflationary Cosmological ModelsAug 15 2002By using the thermodynamic theory of irreversible processes and Einstein general relativity, a cosmological model is proposed where the early universe is considered as a mixture of a scalar field with a matter field. The scalar field refers to the inflaton ... More Non-minimally coupled tachyon field with Noether symmetry under the Palatini approachNov 13 2014A model for a homogeneous, isotropic, flat Universe composed by dark energy and matter is investigated. Dark energy is considered to behave as a tachyon field, which is non-minimally coupled to gravity. The connection is treated as metric independent ... More Palatini approach to 1/R gravity and its implications to the late UniverseApr 19 2004By applying the Palatini approach to the 1/R-gravity model it is possible to explain the present accelerated expansion of the Universe. Investigation of the late Universe limiting case shows that: (i) due to the curvature effects the energy-momentum tensor ... More Polymorphism of syndiotactic polystyrene crystals from multiscale simulationsMay 25 2018Syndiotactic polystyrene (sPS) exhibits complex polymorphic behavior upon crystallization. Computational modeling of polymer crystallization has remained a challenging task because the relevant processes are slow on the molecular time scale. We report ... 
More Relative Resolution: A Hybrid Formalism for Fluid MixturesMar 12 2019We show here that molecular resolution is inherently hybrid in terms of relative separation: If molecules are close to each other, they must be characterized by a fine-grained (geometrically detailed) model, yet if molecules are far from each other, they ... More Post-Newtonian Dynamics in Dense Star Clusters: Formation, Masses, and Merger Rates of Highly-Eccentric Black Hole BinariesNov 12 2018Nov 15 2018Using state-of-the-art dynamical simulations of globular clusters, including radiation reaction during black hole encounters and a cosmological model of star cluster formation, we create a realistic population of dynamically-formed binary black hole mergers ... More Probing the Black Hole Merger History in Clusters using Stellar Tidal DisruptionsJan 09 2019The dynamical assembly of binary black holes (BBHs) in dense star clusters (SCs) is one of the most promising pathways for producing observable gravitational wave (GW) sources, however several other formation scenarios likely operate as well. One of the ... More Thermodynamics of a Simple Three-Dimensional DNA Hairpin ModelJun 23 2016We characterize the equation of state for a simple three-dimensional DNA hairpin model using a Metropolis Monte Carlo algorithm. This algorithm was run at constant temperature and fixed separation between the terminal ends of the strand. From the equation ... More A Geometric Consideration of the Erdős-Straus ConjectureNov 13 2014Dec 08 2014In this paper we will explore the solutions to the diophantine equation in the Erd\H{o}s-Straus conjecture. For a prime p we are discussing the relationship between the values x,y,z \in \mathbb{N} so that$$ \frac{4}{p} = \frac{1}{x} + \frac{1}{y} ... 
More Spontaneous supersymmetry breaking in the two-dimensional N=1 Wess-Zumino modelOct 24 2014We study the phase diagram of the two-dimensional N=1 Wess-Zumino model on the lattice using Wilson fermions and the fermion loop formulation. We give a complete nonperturbative determination of the ground state structure in the continuum and infinite ... More RECAST: Extending the Impact of Existing AnalysesOct 12 2010Searches for new physics by experimental collaborations represent a significant investment in time and resources. Often these searches are sensitive to a broader class of models than they were originally designed to test. We aim to extend the impact of ... More Frequentist Hypothesis Testing with Background UncertaintyOct 22 2003We consider the standard Neyman-Pearson hypothesis test of a signal-plus-background hypothesis and background-only hypothesis in the presence of uncertainty on the background-only prediction. Surprisingly, this problem has not been addressed in the recent ... More Knot concordance and homology sphere groupsMay 25 2016We study two homomorphisms to the rational homology sphere group. If $\beta$ denotes the homomorphism from the knot concordance group $\mathcal{C}$ defined by taking double branched covers of knots, we show that the kernel of $\beta$ is infinitely generated ... More 511 keV Line Emission from Nearby Spherical Dwarf GalaxiesNov 17 2016The observed galactic 511 keV line has been interpreted in a number of papers as a possible signal of dark matter annihilation within the galactic bulge. If this is the case then we should expect a similar spectral feature associated with nearby dwarf ... More Dictionary Learning with Few Samples and Matrix ConcentrationMar 30 2015Let $A$ be an $n \times n$ matrix, $X$ be an $n \times p$ matrix and $Y = AX$. A challenging and important problem in data analysis, motivated by dictionary learning and other practical problems, is to recover both $A$ and $X$, given $Y$. Under normal ... 
More Trackability with Imprecise LocalizationDec 19 2013Imagine a tracking agent $P$ who wants to follow a moving target $Q$ in $d$-dimensional Euclidean space. The tracker has access to a noisy location sensor that reports an estimate $\tilde{Q}(t)$ of the target's true location $Q(t)$ at time $t$, where ... More
http://tex.stackexchange.com/questions/54003/setting-acronyms-to-appear-as-a-footnotes-in-margin-cause-marginpar-moved-warnin
# Setting acronyms to appear as footnotes in the margin causes a "Marginpar moved" warning

I am using the glossaries package with the acronym and footnote options. I have also set footnotes to appear in the margin. With multiple acronyms on one line, multiple footnotes are placed in the margin. To avoid overlap, the notes are moved, which causes the "Marginpar moved" warning. Is there a way to prevent these warnings when multiple notes are generated in the margin for content from a single line?

**Answer 1:** Your question does not provide the technical solution which leads to the warnings. That said: give the marginfix package a try (`\usepackage{marginfix}` before `\begin{document}`).

> marginfix did indeed fix it! I didn't think my problem was covered by the fix, but it seems to have done the trick. Thanks! – Dave, May 1 '12 at 20:27

**Answer 2:** You just want to remove the warning, not stop them moving? If that is what you meant then: `\makeatletter` …

> The silence package also has this "muting" capability. – Werner, May 1 '12 at 20:25
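A minimal preamble sketch combining the two suggestions above. The glossaries/footmisc setup is an assumption, since the question's own code is not shown, and whether marginfix cooperates with footmisc's `side` option in a given document is not tested here; the commented-out silence lines show the alternative of merely muting the warning.

```latex
\documentclass{article}
\usepackage[side]{footmisc}       % footnotes in the margin (assumed setup; uses \marginpar)
\usepackage[acronym,footnote]{glossaries}
\usepackage{marginfix}            % repacks margin notes instead of "moving" them
% Alternative: keep the default placement and only silence the warning:
% \usepackage{silence}
% \WarningFilter{latex}{Marginpar on page}
\makeglossaries
% hypothetical acronyms, just to trigger two margin notes on one line
\newacronym{gcd}{GCD}{greatest common divisor}
\newacronym{lcm}{LCM}{least common multiple}
\begin{document}
The \gls{gcd} and the \gls{lcm} on one line produce two margin notes.
\end{document}
```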
https://worldwidescience.org/topicpages/i/include+energy+resolved.html
#### Sample records for include energy resolved 1. Analysis of electronic models for solar cells including energy resolved defect densities Energy Technology Data Exchange (ETDEWEB) Glitzky, Annegret 2010-07-01 We introduce an electronic model for solar cells including energy resolved defect densities. The resulting drift-diffusion model corresponds to a generalized van Roosbroeck system with additional source terms coupled with ODEs containing space and energy as parameters for all defect densities. The system has to be considered in heterostructures and with mixed boundary conditions from device simulation. We give a weak formulation of the problem. If the boundary data and the sources are compatible with thermodynamic equilibrium the free energy along solutions decays monotonically. In other cases it may be increasing, but we estimate its growth. We establish boundedness and uniqueness results and prove the existence of a weak solution. This is done by considering a regularized problem, showing its solvability and the boundedness of its solutions independent of the regularization level. (orig.) 2. Energy-resolved positron annihilation for molecules International Nuclear Information System (INIS) Barnes, L.D.; Gilbert, S.J.; Surko, C.M. 2003-01-01 This paper presents an experimental study designed to address the long-standing question regarding the origin of very large positron annihilation rates observed for many molecules. We report a study of the annihilation, resolved as a function of positron energy (ΔE∼25 meV, full width at half maximum) for positron energies from 50 meV to several eV. Annihilation measurements are presented for a range of hydrocarbon molecules, including a detailed study of alkanes, CnH2n+2, for n=1-9 and 12. Data for other molecules are also presented: C2H2, C2H4; CD4; isopentane; partially fluorinated and fluorinated methane (CHxF4-x); 1-fluorohexane (C6H13F) and 1-fluorononane (C9H19F).
A key feature of the results is very large enhancements in the annihilation rates at positron energies corresponding to the excitation of molecular vibrations in larger alkane molecules. These enhancements are believed to be responsible for the large annihilation rates observed for Maxwellian distributions of positrons in molecular gases. In alkane molecules larger than ethane (C2H6), the position of these peaks is shifted downward by an amount ∼20 meV per carbon. The results presented here are generally consistent with a physical picture recently considered in detail by Gribakin [Phys. Rev. A 61, 022720 (2000)]. In this model, the incoming positron excites a vibrational Feshbach resonance and is temporarily trapped on the molecule, greatly enhancing the probability of annihilation. The applicability of this model and the resulting enhancement in annihilation rate relies on the existence of positron-molecule bound states. In accord with this reasoning, the experimental results presented here provide the most direct evidence to date that positrons bind to neutral molecules. The shift in the position of the resonances is interpreted as a measure of the binding energy of the positron to the molecule. Other features of the results are also discussed, including large 3. Computed tomography with energy-resolved detection: a feasibility study Science.gov (United States) 2008-03-01 The feasibility of computed tomography (CT) with energy-resolved x-ray detection has been investigated. A breast CT design with multi-slit multi-slice (MSMS) data acquisition was used for this study. The MSMS CT includes linear arrays of photon counting detectors separated by gaps. This CT configuration allows for efficient scatter rejection and 3D data acquisition. The energy-resolved CT images were simulated using a digital breast phantom and the design parameters of the proposed MSMS CT.
The phantom had 14 cm diameter and 50/50 adipose/glandular composition, and included carcinoma, adipose, blood, iodine and CaCO3 as contrast elements. The x-ray technique was 90 kVp tube voltage with 660 mR skin exposure. Photon counting, charge (energy) integrating and photon energy weighting CT images were generated. The contrast-to-noise (CNR) improvement with photon energy weighting was quantified. The dual energy subtracted images of CaCO3 and iodine were generated using a single CT scan at a fixed x-ray tube voltage. The x-ray spectrum was electronically split into low- and high-energy parts by a photon counting detector. The CNR of the energy weighting CT images of carcinoma, blood, adipose, iodine, and CaCO3 was higher by a factor of 1.16, 1.20, 1.21, 1.36 and 1.35, respectively, as compared to CT with a conventional charge (energy) integrating detector. Photon energy weighting was applied to CT projections prior to dual energy subtraction and reconstruction. Photon energy weighting improved the CNR in dual energy subtracted CT images of CaCO3 and iodine by a factor of 1.35 and 1.33, respectively. The combination of CNR improvements due to scatter rejection and energy weighting was in the range of 1.71-2 depending on the type of the contrast element. The tilted angle CZT detector was considered as the detector of choice. Experiments were performed to test the effect of the tilting angle on the energy spectrum. Using the CZT detector with 20° tilting angle decreased the 4. Spatially resolved X-ray energy analysis International Nuclear Information System (INIS) Aronson, M.; Horowitz, P. 1981-01-01 We have constructed a proton-induced X-ray emission (PIXE) analysis system that performs one- or two-dimensional scans of a sample and stores energy spectra at each point for later analysis. 
This system permits examination of the spectra or the spatial distribution of a selected element as data is being gathered, and allows versatile imaging and graphing analysis later. The boundaries of the region under study can easily be altered, both for one-dimensional line scans and two-dimensional rasters. The system includes provisions for beam-current normalization and baseline removal. (orig.) 5. Optimal ''image-based'' weighting for energy-resolved CT International Nuclear Information System (INIS) Schmidt, Taly Gilat 2009-01-01 This paper investigates a method of reconstructing images from energy-resolved CT data with negligible beam-hardening artifacts and improved contrast-to-noise ratio (CNR) compared to conventional energy-weighting methods. Conceptually, the investigated method first reconstructs separate images from each energy bin. The final image is a linear combination of the energy-bin images, with the weights chosen to maximize the CNR in the final image. The optimal weight of a particular energy-bin image is derived to be proportional to the contrast-to-noise-variance ratio in that image. The investigated weighting method is referred to as ''image-based'' weighting, although, as will be described, the weights can be calculated and the energy-bin data combined prior to reconstruction. The performance of optimal image-based energy weighting with respect to CNR and beam-hardening artifacts was investigated through simulations and compared to that of energy integrating, photon counting, and previously studied optimal ''projection-based'' energy weighting. Two acquisitions were simulated: dedicated breast CT and a conventional thorax scan. The energy-resolving detector was simulated with five energy bins. Four methods of estimating the optimal weights were investigated, including task-specific and task-independent methods and methods that require a single reconstruction versus multiple reconstructions.
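The weighting rule described in this record (each bin image weighted in proportion to its contrast-to-noise-variance ratio) can be sketched numerically. The bin contrasts and noise variances below are invented for illustration, not values from the paper.

```python
# Hedged sketch of "image-based" optimal energy weighting: combine
# per-energy-bin images with weights proportional to each bin's
# contrast-to-noise-variance ratio.  All numbers are illustrative.
import math

def optimal_bin_weights(contrasts, noise_vars):
    """Weights proportional to contrast / noise variance, normalized to sum to 1."""
    raw = [c / v for c, v in zip(contrasts, noise_vars)]
    total = sum(raw)
    return [w / total for w in raw]

def combined_cnr(contrasts, noise_vars, weights):
    """CNR of a weighted sum of statistically independent bin images."""
    contrast = sum(w * c for w, c in zip(weights, contrasts))
    noise = math.sqrt(sum(w * w * v for w, v in zip(weights, noise_vars)))
    return contrast / noise

# Five energy bins: lower-energy bins carry more contrast but also more noise.
contrasts = [5.0, 4.0, 3.0, 2.0, 1.0]
noise_vars = [4.0, 2.0, 1.5, 1.0, 1.0]

w_opt = optimal_bin_weights(contrasts, noise_vars)
cnr_opt = combined_cnr(contrasts, noise_vars, w_opt)
cnr_uniform = combined_cnr(contrasts, noise_vars, [0.2] * 5)
assert cnr_opt > cnr_uniform  # optimal weights beat uniform weighting here
```

For independent bin images this choice of weights is the Cauchy-Schwarz optimum, so the combined CNR can never be worse than any fixed weighting of the same bins.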
Results demonstrated that optimal image-based weighting improved the CNR compared to energy-integrating weighting by factors of 1.15-1.6 depending on the task. Compared to photon-counting weighting, the CNR improvement ranged from 1.0 to 1.3. The CNR improvement factors were comparable to those of projection-based optimal energy weighting. The beam-hardening cupping artifact increased from 5.2% for energy-integrating weighting to 12.8% for optimal projection-based weighting, while optimal image-based weighting reduced the cupping to 0.6%. Overall, optimal image-based energy weighting 6. Energy-resolved computed tomography: first experimental results International Nuclear Information System (INIS) 2008-01-01 First experimental results with energy-resolved computed tomography (CT) are reported. The contrast-to-noise ratio (CNR) in CT has been improved with x-ray energy weighting for the first time. Further, x-ray energy weighting improved the CNR in material decomposition CT when applied to CT projections prior to dual-energy subtraction. The existing CT systems use an energy (charge) integrating x-ray detector that provides a signal proportional to the energy of the x-ray photon. Thus, the x-ray photons with lower energies are scored less than those with higher energies. This underestimates contribution of lower energy photons that would provide higher contrast. The highest CNR can be achieved if the x-ray photons are scored by a factor that would increase as the x-ray energy decreases. This could be performed by detecting each x-ray photon separately and measuring its energy. The energy selective CT data could then be saved, and any weighting factor could be applied digitally to a detected x-ray photon. The CT system includes a photon counting detector with linear arrays of pixels made from cadmium zinc telluride (CZT) semiconductor. A cylindrical phantom with 10.2 cm diameter made from tissue-equivalent material was used for CT imaging. 
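The per-photon digital weighting described in this record (each detected photon's energy is measured, and a weighting factor that grows as energy decreases is applied after acquisition) can be sketched as follows. The bin energies, counts, and the w(E) = E^-3 form are illustrative assumptions (E^-3 is one weighting commonly discussed in the energy-weighting literature), not values from the paper.

```python
# Hedged sketch of digital per-photon energy weighting in energy-selective
# CT: each counted photon of energy E is scored with a weight w(E) applied
# after acquisition.  All numbers below are illustrative.
import math

energies = [30.0, 50.0, 70.0, 90.0]        # keV, bin centers
n_bg = [1000.0, 1000.0, 1000.0, 1000.0]    # counts without the contrast object
n_obj = [700.0, 850.0, 930.0, 970.0]       # counts behind the object

def cnr(weights):
    """CNR of the weighted signal, assuming Poisson counting noise."""
    signal = sum(w * (b - o) for w, b, o in zip(weights, n_bg, n_obj))
    noise = math.sqrt(sum(w * w * b for w, b in zip(weights, n_bg)))
    return signal / noise

cnr_integrating = cnr(energies)                   # w(E) = E    (energy integrating)
cnr_counting = cnr([1.0] * len(energies))         # w(E) = 1    (photon counting)
cnr_weighted = cnr([e ** -3 for e in energies])   # w(E) = E^-3 (energy weighting)
assert cnr_weighted > cnr_counting > cnr_integrating
```

Because most of the contrast sits in the low-energy bins, down-weighting high-energy photons raises the CNR; the CNR here is scale-invariant in the weights, so no normalization is needed.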
The phantom included contrast elements representing calcifications, iodine, adipose and glandular tissue. The x-ray tube voltage was 120 kVp. The energy selective CT data were acquired, and used to generate energy-weighted and material-selective CT images. The energy-weighted and material decomposition CT images were generated using a single CT scan at a fixed x-ray tube voltage. For material decomposition the x-ray spectrum was digitally split into low- and high-energy parts and dual-energy subtraction was applied. The x-ray energy weighting resulted in CNR improvement of calcifications and iodine by a factor of 1.40 and 1.63, respectively, as compared to conventional charge integrating CT. The x-ray energy weighting was also applied 7. Detectors for Energy-Resolved Fast Neutron Imaging OpenAIRE Dangendorf, V.; Breskin, A.; Chechik, R.; Feldman, G.; Goldberg, M. B.; Jagutzki, O.; Kersten, C.; Laczko, G.; Mor, I.; Spillman, U.; Vartsky, D. 2004-01-01 Two detectors for energy-resolved fast-neutron imaging in pulsed broad-energy neutron beams are presented. The first one is a neutron-counting detector based on a solid neutron converter coupled to a gaseous electron multiplier (GEM). The second is an integrating imaging technique, based on a scintillator for neutron conversion and an optical imaging system with fast framing capability. 8. Solar Energy Education. Renewable energy: a background text. [Includes glossary Energy Technology Data Exchange (ETDEWEB) 1985-01-01 Some of the most common forms of renewable energy are presented in this textbook for students. The topics include solar energy, wind power, hydroelectric power, biomass, ocean thermal energy, and tidal and geothermal energy. The main emphasis of the text is on the sun and the solar energy that it yields. Discussions on the sun's composition and the relationship between the earth, sun and atmosphere are provided.
Insolation, active and passive solar systems, and solar collectors are the subtopics included under solar energy. (BCS) 9. Resolving runaway electron distributions in space, time, and energy Science.gov (United States) Paz-Soldan, C.; Cooper, C. M.; Aleynikov, P.; Eidietis, N. W.; Lvovskiy, A.; Pace, D. C.; Brennan, D. P.; Hollmann, E. M.; Liu, C.; Moyer, R. A.; Shiraki, D. 2018-05-01 Areas of agreement and disagreement with present-day models of runaway electron (RE) evolution are revealed by measuring MeV-level bremsstrahlung radiation from runaway electrons (REs) with a pinhole camera. Spatially resolved measurements localize the RE beam, reveal energy-dependent RE transport, and can be used to perform full two-dimensional (energy and pitch-angle) inversions of the RE phase-space distribution. Energy-resolved measurements find qualitative agreement with modeling on the role of collisional and synchrotron damping in modifying the RE distribution shape. Measurements are consistent with predictions of phase-space attractors that accumulate REs, with non-monotonic features observed in the distribution. Temporally resolved measurements find qualitative agreement with modeling on the impact of collisional and synchrotron damping in varying the RE growth and decay rate. Anomalous RE loss is observed and found to be largest at low energy. Possible roles for kinetic instability or spatial transport to resolve these anomalies are discussed. 10. Modern electron microscopy resolved in space, energy and time Science.gov (United States) Carbone, F. 2011-06-01 Recent pioneering experiments combining ultrafast lasers with electron-based technology demonstrated the possibility to obtain real-time information about chemical bonds and their dynamics during reactions and phase transformation. 
These techniques have been successfully applied to several states of matter including gases, liquids, solids and biological samples showing a unique versatility thanks to the high sensitivity of electrons to tiny amounts of material and their low radiation damage. A very powerful tool, the time-resolved Transmission Electron Microscope (TEM), is capable of delivering information on the structure of ordered and disordered matter through diffraction and imaging, with a spatial resolution down to the atomic limit (10⁻¹⁰ m); the same apparatus can distinguish dynamical phenomena happening on the time-scales between fs and ms, with a dynamic range of 12 orders of magnitude. At the same time, spectroscopic information can be obtained from the loss of kinetic energy of electrons interacting with specimens in the range of interband transitions and plasmons in solids, or charge transfers in molecules, all the way up to the atomic core levels with the same time-resolution. In this contribution, we focus on the recent advances in fs Electron Energy Loss Spectroscopy (FEELS), discussing the main results and their implications for future studies. 11. Including environmental concerns in energy policies International Nuclear Information System (INIS) Potier, Michel 2014-05-01 In this article, the author comments on the different impacts on the environment and risks related to energy, given that all energies have an impact on the environment (renewable energies are generally cleaner than fossil energies) and these impacts can be on human health, ecosystems, buildings, crops, landscapes, and climate change. He comments on the efforts made in the search for higher energy efficiency, and proposes an overview of the various available tools implemented by environmental policies in the energy sector: regulatory instruments, economic instruments, negotiated agreements, and informational instruments.
He comments on the implementation of an energy tax aimed at fostering greater respect for the environment 12. The Dark Energy Survey: Prospects for resolved stellar populations Energy Technology Data Exchange (ETDEWEB) Rossetto, Bruno M. [Observatorio Nacional, Rio de Janeiro (Brazil); Lab. Interinstitucional de e-Astronomia-LIneA, Rio de Janeiro (Brazil); Santiago, Basílio X. [Lab. Interinstitucional de e-Astronomia-LIneA, Rio de Janeiro (Brazil); Instituto de Fisica, Porto Alegre (Brazil); Girardi, Léo [Lab. Interinstitucional de e-Astronomia-LIneA, Rio de Janeiro (Brazil); Osservatorio Astronomica di Padova-INAF, Padova (Italy); Camargo, Julio I. B. [Observatorio Nacional, Rio de Janeiro (Brazil); Lab. Interinstitucional de e-Astronomia-LIneA, Rio de Janeiro (Brazil); Balbinot, Eduardo [Lab. Interinstitucional de e-Astronomia-LIneA, Rio de Janeiro (Brazil); Instituto de Fisica, Porto Alegre (Brazil); da Costa, Luiz N. [Observatorio Nacional, Rio de Janeiro (Brazil); Lab. Interinstitucional de e-Astronomia-LIneA, Rio de Janeiro (Brazil); Yanny, Brian [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Maia, Marcio A. G. [Observatorio Nacional, Rio de Janeiro (Brazil); Lab. Interinstitucional de e-Astronomia-LIneA, Rio de Janeiro (Brazil); Makler, Martin [Lab. Interinstitucional de e-Astronomia-LIneA, Rio de Janeiro (Brazil); Centro Brasileiro de Pesquisas Fisicas, Rio de Janeiro (Brazil); Ogando, Ricardo L. C. [Observatorio Nacional, Rio de Janeiro (Brazil); Lab. Interinstitucional de e-Astronomia-LIneA, Rio de Janeiro (Brazil); Pellegrini, Paulo S. [Observatorio Nacional, Rio de Janeiro (Brazil); Lab. Interinstitucional de e-Astronomia-LIneA, Rio de Janeiro (Brazil); Ramos, Beatriz [Observatorio Nacional, Rio de Janeiro (Brazil); Lab. Interinstitucional de e-Astronomia-LIneA, Rio de Janeiro (Brazil); de Simoni, Fernando [Observatorio Nacional, Rio de Janeiro (Brazil); Lab.
Interinstitucional de e-Astronomia-LIneA, Rio de Janeiro (Brazil); Armstrong, R. [Univ. of Illinois, Urbana, IL (United States); Bertin, E. [Univ. Pierre et Marie Curie, Paris (France); Desai, S. [Univ. of Illinois, Urbana, IL (United States); Kuropatkin, N. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Lin, H. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Mohr, J. J. [Max-Planck-Institut fur extraterrestrische Physik, Garching (Germany); Tucker, D. L. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States) 2011-05-06 Wide angle and deep surveys, regardless of their primary purpose, always sample a large number of stars in the Galaxy and in its satellite system. We here make a forecast of the expected stellar sample resulting from the Dark Energy Survey and the perspectives that it will open for studies of Galactic structure and resolved stellar populations in general. An estimated 1.2 × 10⁸ stars will be sampled in DES grizY filters in the southern equatorial hemisphere. This roughly corresponds to 20% of all DES sources. Most of these stars belong to the stellar thick disk and halo of the Galaxy. 13. Vibrationally resolved electronic spectra including vibrational pre-excitation: Theory and application to VIPER spectroscopy Science.gov (United States) von Cosel, Jan; Cerezo, Javier; Kern-Michler, Daniela; Neumann, Carsten; van Wilderen, Luuk J. G. W.; Bredenbeck, Jens; Santoro, Fabrizio; Burghardt, Irene 2017-10-01 Vibrationally resolved electronic absorption spectra including the effect of vibrational pre-excitation are computed in order to interpret and predict vibronic transitions that are probed in the Vibrationally Promoted Electronic Resonance (VIPER) experiment [L. J. G. W. van Wilderen et al., Angew. Chem., Int. Ed. 53, 2667 (2014)].
To this end, we employ time-independent and time-dependent methods based on the evaluation of Franck-Condon overlap integrals and Fourier transformation of time-domain wavepacket autocorrelation functions, respectively. The time-independent approach uses a generalized version of the FCclasses method [F. Santoro et al., J. Chem. Phys. 126, 084509 (2007)]. In the time-dependent approach, autocorrelation functions are obtained by wavepacket propagation and by the evaluation of analytic expressions, within the harmonic approximation including Duschinsky rotation effects. For several medium-sized polyatomic systems, it is shown that selective pre-excitation of particular vibrational modes leads to a redshift of the low-frequency edge of the electronic absorption spectrum, which is a prerequisite for the VIPER experiment. This effect is typically most pronounced upon excitation of modes that are significantly displaced during the electronic transition, such as ring distortion modes within an aromatic π-system. Theoretical predictions as to which modes show the strongest VIPER effect are found to be in excellent agreement with experiment. 14. Resolving environmental issues in energy development: roles for the Department of Energy and its field offices Energy Technology Data Exchange (ETDEWEB) Ellickson, P.L.; Merrow, E.W. 1979-01-01 This study asks what the Department of Energy (DOE) might do to resolve environmental conflicts that arise during the implementation of energy projects or programs. We define implementation as efforts to establish an energy facility at a specific site. The environmental concerns surrounding implementation serve as touchstones of the relevance and feasibility of national energy policies. We have analyzed geothermal development in California and oil shale development in Colorado and Utah and addressed the following questions: By what processes are energy and environmental tradeoffs made?
In what circumstances can DOE participation in these processes lead to a more satisfactory outcome? What options does DOE have for resolving environmental issues and how can it choose the best option? How can DOE establish an effective working relationship with both the governmental and private groups affected by the siting and operation of energy projects? The government's most effective role in resolving environmental conflicts and uncertainties is to improve communications among the concerned parties. This role requires flexibility and evenhandedness from the government as well as an understanding of the local conditions and a commitment to appropriate local solutions. Involving local sources at every stage of the environmental impact analysis will reduce the probability of conflicts and make those that do arise more easily resolvable.
An EJM is modeled for China, the European Union and the United States, and for different energy infrastructure in the United Kingdom. The EJM is plotted on a Ternary Phase Diagram which is used in the sciences for analyzing the relationship (trilemma) of three forms of matter. The development of an EJM can provide a tool for decision-making on energy policy and one that solves the energy trilemma with a just and equitable approach. - Highlights: • Energy justice advances energy policy with cosmopolitanism and new economic-thinking. • An Energy Justice Metric is developed and captures the dynamics of energy justice. • The Energy Justice Metric (EJM) compares countries, and energy infrastructure. • EJM provides an energy policy decision-making tool that is just and equitable. 16. Solar Energy Education. Reader, Part II. Sun story. [Includes glossary Energy Technology Data Exchange (ETDEWEB) 1981-05-01 Magazine articles which focus on the subject of solar energy are presented. The booklet prepared is the second of a four part series of the Solar Energy Reader. Excerpts from the magazines include the history of solar energy, mythology and tales, and selected poetry on the sun. A glossary of energy related terms is included. (BCS) 17. The role of solar energy in resolving global problems International Nuclear Information System (INIS) Kendall, H.W. 1993-01-01 Solar energy, and other alternate energy sources, including improved energy efficiency, can play a significant role in the solution of the cluster of ''great problems'' that face the present generation. These problems are related to, first, environmental damage, second, management of critical resources, and lastly, spiraling population growth. Some aspects of these linked difficulties are not yet well comprehended, even within the environmental community, though their neglect could prove to be very serious. It was the principal purpose of the paper to address those hidden risks. 
Seeking prompt and effective solutions to these problems is now a most urgent matter. On November 18, 1992, the Union of Concerned Scientists released a document called ''World Scientists' Warning to Humanity''. The document outlined the most important challenges and set out the principal elements required to deal with them. It was signed by some 1,600 scientists from around the world, including the leaders of a substantial number of national honorary, scientific societies. In what follows, relevant elements of that statement are reviewed to set the stage for a description of solar energy's role in dealing with the situation that the world faces 18. The Dosepix detector—an energy-resolving photon-counting pixel detector for spectrometric measurements CERN Document Server Zang, A; Ballabriga, R; Bisello, F; Campbell, M; Celi, J C; Fauler, A; Fiederle, M; Jensch, M; Kochanski, N; Llopart, X; Michel, N; Mollenhauer, U; Ritter, I; Tennert, F; Wölfel, S; Wong, W; Michel, T 2015-01-01 The Dosepix detector is a hybrid photon-counting pixel detector based on ideas of the Medipix and Timepix detector family. 1 mm thick cadmium telluride and 300 μm thick silicon were used as sensor material. The pixel matrix of the Dosepix consists of 16 × 16 square pixels with 12 rows of (200 μm)² and 4 rows of (55 μm)² sensitive area for the silicon sensor layer and 16 rows of pixels with 220 μm pixel pitch for CdTe. Besides digital energy integration and photon-counting mode, a novel concept of energy binning is included in the pixel electronics, allowing energy-resolved measurements in 16 energy bins within one acquisition. The possibilities of this detector concept range from applications in personal dosimetry and energy-resolved imaging to quality assurance of medical X-ray sources by analysis of the emitted photon spectrum. In this contribution the Dosepix detector, its response to X-rays as well as spectrum measurements with Si and CdTe sensor layer are presented.
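The 16-bin energy-binning concept described for the Dosepix pixel electronics amounts to building a coarse spectrum per pixel within a single acquisition: each registered photon increments one of 16 energy bins instead of a single counter. A minimal sketch, with invented bin edges and photon energies (not detector data):

```python
# Minimal sketch of per-pixel energy binning: each registered photon
# increments one of 16 energy bins, giving a coarse spectrum per pixel
# in a single acquisition.  All numbers are illustrative.
import random

random.seed(0)
n_bins = 16
e_min, e_max = 10.0, 90.0                  # keV range covered by the bins
bin_width = (e_max - e_min) / n_bins

# Energies of photons registered by one pixel during one acquisition.
photon_energies = [random.uniform(e_min, e_max) for _ in range(5000)]

counts_per_bin = [0] * n_bins
for e in photon_energies:
    # Clamp so a photon exactly at e_max still lands in the last bin.
    index = min(int((e - e_min) / bin_width), n_bins - 1)
    counts_per_bin[index] += 1

assert sum(counts_per_bin) == len(photon_energies)  # every photon is binned
```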
Furthermore, a first evaluation wa... 19. Optimal ''image-based'' weighting for energy-resolved CT Energy Technology Data Exchange (ETDEWEB) Schmidt, Taly Gilat [Department of Biomedical Engineering, Marquette University, Milwaukee, Wisconsin 53201 (United States) 2009-07-15 This paper investigates a method of reconstructing images from energy-resolved CT data with negligible beam-hardening artifacts and improved contrast-to-noise ratio (CNR) compared to conventional energy-weighting methods. Conceptually, the investigated method first reconstructs separate images from each energy bin. The final image is a linear combination of the energy-bin images, with the weights chosen to maximize the CNR in the final image. The optimal weight of a particular energy-bin image is derived to be proportional to the contrast-to-noise-variance ratio in that image. The investigated weighting method is referred to as ''image-based'' weighting, although, as will be described, the weights can be calculated and the energy-bin data combined prior to reconstruction. The performance of optimal image-based energy weighting with respect to CNR and beam-hardening artifacts was investigated through simulations and compared to that of energy integrating, photon counting, and previously studied optimal ''projection-based'' energy weighting. Two acquisitions were simulated: dedicated breast CT and a conventional thorax scan. The energy-resolving detector was simulated with five energy bins. Four methods of estimating the optimal weights were investigated, including task-specific and task-independent methods and methods that require a single reconstruction versus multiple reconstructions. Results demonstrated that optimal image-based weighting improved the CNR compared to energy-integrating weighting by factors of 1.15-1.6 depending on the task. Compared to photon-counting weighting, the CNR improvement ranged from 1.0 to 1.3.
The CNR improvement factors were comparable to those of projection-based optimal energy weighting. The beam-hardening cupping artifact increased from 5.2% for energy-integrating weighting to 12.8% for optimal projection-based weighting, while optimal image-based weighting reduced the cupping to 0.6% 20. Decision analytic tools for resolving uncertainty in the energy debate International Nuclear Information System (INIS) Renn, O. 1986-01-01 Within the context of a Social Compatibility Study on Energy Supply Systems a complex decision making model was used to incorporate scientific expertise and public participation into the process of policy formulation and evaluation. The study was directed by the program group ''Technology and Society'' of the Nuclear Research Centre Juelich. It consisted of three parts: First, with the aid of value tree analysis the whole spectrum of concern and dimensions relevant to the energy issue in Germany was collected and structured in a combined value tree representing the values and criteria of nine important interest groups in the Federal Republic of Germany. Second, the revealed criteria were translated into indicators. Four different energy scenarios were evaluated with respect to each indicator making use of physical measurement, literature review and expert surveys. Third, the weights for each indicator were elicited by interviewing randomly chosen citizens. Those citizens were informed about the scenarios and their impacts prior to the weighting process in a four day seminar. As a result most citizens favoured more moderate energy scenarios assigning high priority to energy conservation. Nuclear energy was perceived as a necessary energy source in the long run, but should be restricted to meet only the demand that cannot be covered by other energy means. (orig.) 1. Kalaeloa Energy System Redevelopment Options Including Advanced Microgrids. Energy Technology Data Exchange (ETDEWEB) Hightower, Marion Michael [Sandia National Lab.
(SNL-NM), Albuquerque, NM (United States); Baca, Michael J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); VanderMey, Carissa [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States) 2017-03-01 In June 2016, the Department of Energy's (DOE's) Office of Energy Efficiency and Renewable Energy (EERE) in collaboration with the Renewable Energy Branch for the Hawaii State Energy Office (HSEO), the Hawaii Community Development Authority (HCDA), the United States Navy (Navy), and Sandia National Laboratories (Sandia) established a project to 1) assess the current functionality of the energy infrastructure at the Kalaeloa Community Development District, and 2) evaluate options to use both existing and new distributed and renewable energy generation and storage resources within advanced microgrid frameworks to cost-effectively enhance energy security and reliability for critical stakeholder needs during both short-term and extended electric power outages. This report discusses the results of a stakeholder workshop and associated site visits conducted by Sandia in October 2016 to identify major Kalaeloa stakeholder and tenant energy issues, concerns, and priorities. The report also documents information on the performance and cost benefits of a range of possible energy system improvement options including traditional electric grid upgrade approaches, advanced microgrid upgrades, and combined grid/microgrid improvements. The costs and benefits of the different improvement options are presented, comparing options to see how well they address the energy system reliability, sustainability, and resiliency priorities identified by the Kalaeloa stakeholders. 2. Time Resolved Energy Transfer and Photodissociation of Vibrationally Excited Molecules National Research Council Canada - National Science Library Crim, F. F 2007-01-01 ...) in solution and in the gas phase. 
This second experiment is one of the few direct comparisons of intramolecular vibrational energy flow in a solvated molecule with that in the same molecule isolated in a gas... 3. Highly-resolved modeling of personal transportation energy consumption in the United States International Nuclear Information System (INIS) Muratori, Matteo; Moran, Michael J.; Serra, Emmanuele; Rizzoni, Giorgio 2013-01-01 This paper centers on the estimation of the total primary energy consumption for personal transportation in the United States, to include gasoline and/or electricity consumption, depending on vehicle type. The bottom-up sector-based estimation method introduced here contributes to a computational tool under development at The Ohio State University for assisting decision making in energy policy, pricing, and investment. In order to simulate highly-resolved consumption profiles three main modeling steps are needed: modeling the behavior of drivers, generating realistic driving profiles, and simulating energy consumption of different kinds of vehicles. The modeling proposed allows for evaluating the impact of plug-in electric vehicles on the electric grid – especially at the distribution level. It can serve as a tool to compare different vehicle types and assist policy-makers in estimating their impact on primary energy consumption and the role transportation can play to reduce oil dependency. - Highlights: • Modeling primary energy consumption for personal transportation in the United States. • Behavior of drivers has been simulated in order to establish when driving events occur and the length of each event. • Realistic driving profiles for each driving event are generated using a stochastic model. • The model allows for comparing the initial cost of different vehicles and their expected energy-use operating cost. • Evaluation of the impact of PEVs on the electric grid – especially at the distribution level – can be performed 4. 
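The three modeling steps in the transportation-energy abstract above (when driving events occur and how long they are, a stochastic profile for each event, and per-vehicle energy consumption) can be caricatured in a few lines. Everything here — the Gaussian trip-length draw, the 0.18 kWh/km consumption rate, the five daily events — is an invented placeholder, not the model from the paper:

```python
import random

random.seed(42)  # reproducible illustration

# Caricature of the bottom-up three-step structure (all distributions and
# constants below are invented placeholders):
# 1) when driving events occur and how long they are,
# 2) a stochastic profile for each event,
# 3) energy consumption of a given vehicle over that profile.

def driving_events(n):
    """Step 1: draw n trip lengths in km (hypothetical Gaussian, floored at 1 km)."""
    return [max(1.0, random.gauss(15.0, 8.0)) for _ in range(n)]

def energy_per_event(km, kwh_per_km=0.18):
    """Steps 2-3 collapsed: assume a flat consumption rate (made-up value)."""
    return km * kwh_per_km

# One simulated day with five driving events:
daily_kwh = sum(energy_per_event(d) for d in driving_events(5))
```

Summing such simulated days over many drivers is what yields the highly-resolved consumption profiles the abstract describes.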
Resolving Shifting Patterns of Muscle Energy Use in Swimming Fish Science.gov (United States) Gerry, Shannon P.; Ellerby, David J. 2014-01-01 Muscle metabolism dominates the energy costs of locomotion. Although in vivo measures of muscle strain, activity and force can indicate mechanical function, similar muscle-level measures of energy use are challenging to obtain. Without this information locomotor systems are essentially a black box in terms of the distribution of metabolic energy. Although in situ measurements of muscle metabolism are not practical in multiple muscles, the rate of blood flow to skeletal muscle tissue can be used as a proxy for aerobic metabolism, allowing the cost of particular muscle functions to be estimated. Axial, undulatory swimming is one of the most common modes of vertebrate locomotion. In fish, segmented myotomal muscles are the primary power source, driving undulations of the body axis that transfer momentum to the water. Multiple fins and the associated fin muscles also contribute to thrust production, and stabilization and control of the swimming trajectory. We have used blood flow tracers in swimming rainbow trout (Oncorhynchus mykiss) to estimate the regional distribution of energy use across the myotomal and fin muscle groups to reveal the functional distribution of metabolic energy use within a swimming animal for the first time. Energy use by the myotomal muscle increased with speed to meet thrust requirements, particularly in posterior myotomes where muscle power outputs are greatest. At low speeds, there was high fin muscle energy use, consistent with active stability control. As speed increased, and fins were adducted, overall fin muscle energy use declined, except in the caudal fin muscles where active fin stiffening is required to maintain power transfer to the wake. The present data were obtained under steady-state conditions which rarely apply in natural, physical environments. 
This approach also has potential to reveal the mechanical factors that underlie changes in locomotor cost associated with movement through unsteady flow regimes. PMID:25165858 5. Resolving shifting patterns of muscle energy use in swimming fish. Directory of Open Access Journals (Sweden) Shannon P Gerry 6. An energy dispersive time resolved liquid surface reflectometer CERN Document Server Garrett, R F; King, D J; Dowling, T L; Fullagar, W 2001-01-01 Two designs are presented for an energy dispersive liquid surface reflectometer with time resolution in the milli-second domain. The designs utilise rotating crystal and Laue analyser optics respectively to energy analyse a pink synchrotron X-ray beam after reflection from a liquid surface. Some performance estimates are presented, along with results of a test experiment using a laboratory source and solid state detector. 7. Ionic liquids, electrolyte solutions including the ionic liquids, and energy storage devices including the ionic liquids Science.gov (United States) Gering, Kevin L.; Harrup, Mason K.; Rollins, Harry W. 2015-12-08 An ionic liquid including a phosphazene compound that has a plurality of phosphorus-nitrogen units and at least one pendant group bonded to each phosphorus atom of the plurality of phosphorus-nitrogen units. One pendant group of the at least one pendant group comprises a positively charged pendant group. Additional embodiments of ionic liquids are disclosed, as are electrolyte solutions and energy storage devices including the embodiments of the ionic liquid. 8. Energy and angle resolved ion scattering spectroscopy: new possibilities for surface analysis International Nuclear Information System (INIS) Hellings, G.J.A.
1986-01-01 In this thesis the design and development of a novel, very sensitive, high-resolution spectrometer for surface analysis is described. This spectrometer is designed for Energy and Angle Resolved Ion Scattering Spectroscopy (EARISS). There are only a few techniques that are sensitive enough to study the outermost atomic layer of surfaces. One of these techniques, Low-Energy Ion Scattering (LEIS), is discussed in chapter 2. Since LEIS is destructive, it is important to make very efficient use of the scattered ions. This makes it attractive to carry out energy- and angle-dependent measurements simultaneously (EARISS). (Auth.) 9. Time-resolved energy transduction in a quantum capacitor. Science.gov (United States) Jung, Woojin; Cho, Doohee; Kim, Min-Kook; Choi, Hyoung Joon; Lyo, In-Whan 2011-08-23 The capability to deposit charge and energy quantum-by-quantum into a specific atomic site could lead to many previously unidentified applications. Here we report on a quantum capacitor formed by a strongly localized field possessing such capability. We investigated the charging dynamics of such a capacitor by using a unique scanning tunneling microscopy that combines nanosecond temporal and subangstrom spatial resolution, and by using Si(001) as the electrode as well as the detector for excitations produced by the charging transitions. We show that sudden switching of a localized field induces a transiently empty quantum dot at the surface and that the dot acts as a tunable excitation source with subangstrom site selectivity. The timescale of the deexcitation of the dot suggests the formation of long-lived excited states. Our study illustrates that a quantum capacitor has serious implications not only for bottom-up nanotechnology but also for future switching devices. 10.
ERICA: an energy resolving photon counting readout ASIC for X-ray in-line cameras Science.gov (United States) Macias-Montero, J.-G.; Sarraj, M.; Chmeissani, M.; Moore, T.; Casanova, R.; Martinez, R.; Puigdengoles, C.; Prats, X.; Kolstein, M. 2016-12-01 We present ERICA (Energy Resolving Inline X-ray Camera), a photon-counting readout ASIC with 6 energy bins. The ASIC is composed of a matrix of 8 × 20 pixels controlled by a global digital controller and biased with 7 independent digital-to-analog converters (DACs) and a band-gap current reference. The pixel analog front-end includes a charge sensitive amplifier with 16 mV/ke- gain and a dynamic range of 45 ke-. ERICA has a programmable pulse width, an adjustable constant-current feedback resistor, a linear test pulse generator, and six discriminators with 6-bit local threshold adjustment. The pixel digital back-end includes the digital controller, 8 counters of 8-bit depth, a half-full buffer flag for any of the 8 counters, a 74-bit shadow/shift register, a 74-bit configuration latch, and charge sharing compensation processing to perform the energy classification and counting operations for every detected photon in 1 μs. The pixel size is 330 μm × 330 μm and its average consumption is 150 μW. Implemented in a TSMC 0.25 μm CMOS process, the ASIC pixel's equivalent noise charge (ENC) is 90 e- RMS when connected to a 1 mm thick matching CdTe detector biased at -300 V with a total leakage current of 20 nA. 11. Electrolyte solutions including a phosphoranimine compound, and energy storage devices including same Science.gov (United States) Klaehn, John R.; Dufek, Eric J.; Rollins, Harry W.; Harrup, Mason K.; Gering, Kevin L. 2017-09-12 An electrolyte solution comprising at least one phosphoranimine compound and a metal salt.
The at least one phosphoranimine compound comprises a compound of the chemical structure ##STR00001## where X is an organosilyl group or a tert-butyl group and each of R.sup.1, R.sup.2, and R.sup.3 is independently selected from the group consisting of an alkyl group, an aryl group, an alkoxy group, or an aryloxy group. An energy storage device including the electrolyte solution is also disclosed. 12. Mapping unoccupied electronic states of freestanding graphene by angle-resolved low-energy electron transmission OpenAIRE Wicki Flavio; Longchamp Jean-Nicolas; Latychevskaia Tatiana; Escher Conrad; Fink Hans-Werner 2016-01-01 We report angle-resolved electron transmission measurements through freestanding graphene sheets in the energy range of 18 to 30 eV above the Fermi level. The measurements are carried out in a low-energy electron point source microscope, which allows simultaneously probing the transmission for a large angular range. The characteristics of low-energy electron transmission through graphene depend on its electronic structure above the vacuum level. The experimental technique described here allow... 13. Energy storage device including a redox-enhanced electrolyte Science.gov (United States) Stucky, Galen; Evanko, Brian; Parker, Nicholas; Vonlanthen, David; Auston, David; Boettcher, Shannon; Chun, Sang-Eun; Ji, Xiulei; Wang, Bao; Wang, Xingfeng; Chandrabose, Raghu Subash 2017-08-08 An electrical double layer capacitor (EDLC) energy storage device is provided that includes at least two electrodes and a redox-enhanced electrolyte including two redox couples such that there is a different one of the redox couples for each of the electrodes. 
When charged, the charge is stored in Faradaic reactions with the at least two redox couples in the electrolyte and in a double-layer capacitance of a porous carbon material that comprises at least one of the electrodes, and a self-discharge of the energy storage device is mitigated by at least one of electrostatic attraction, adsorption, physisorption, and chemisorption of a redox couple onto the porous carbon material. 14. Event Centroiding Applied to Energy-Resolved Neutron Imaging at LANSCE Directory of Open Access Journals (Sweden) Nicholas P. Borges 2018-02-01 Full Text Available The energy dependence of the neutron cross section provides vastly different contrast mechanisms from those of polychromatic neutron radiography if neutron energies can be selected for imaging applications. In recent years, energy-resolved neutron imaging (ERNI) with epi-thermal neutrons, utilizing neutron absorption resonances for contrast as well as for quantitative density measurements, was pioneered at the Flight Path 5 beam line at LANSCE and continues to be refined. Here we present event centroiding, i.e., the determination of the center-of-gravity of a detection event on an imaging detector to allow sub-pixel spatial resolution, and apply it to the many frames collected for energy-resolved neutron imaging at a pulsed neutron source. While event centroiding has been demonstrated at thermal neutron sources, it has not been applied to energy-resolved neutron imaging, where the energy resolution must be preserved, and we present a quantification of the achievable resolution as a function of neutron energy. For the 55 μm pixel size of the detector used for this study, we found a resolution improvement from ~80 μm to ~22 μm using pixel centroiding while fully preserving the energy resolution.
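The center-of-gravity calculation behind event centroiding can be sketched briefly. The 3-pixel event cluster and its intensities below are invented for illustration; a real detector pipeline also handles thresholding and cluster finding:

```python
# Intensity-weighted centroid (center-of-gravity) of one detection event
# spread over a small pixel cluster, yielding sub-pixel coordinates.
def centroid(cluster):
    """cluster: iterable of (x, y, intensity) tuples for one detection event."""
    total = sum(i for _, _, i in cluster)
    cx = sum(x * i for x, _, i in cluster) / total
    cy = sum(y * i for _, y, i in cluster) / total
    return cx, cy

# A hypothetical event whose signal leaks into two neighboring pixels:
event = [(10, 20, 5.0), (11, 20, 3.0), (10, 21, 2.0)]
cx, cy = centroid(event)  # falls between pixel centers: (10.3, 20.2)
```

Because the returned coordinates are continuous rather than integer pixel indices, the effective spatial resolution can beat the physical pixel pitch, which is the effect the abstract quantifies.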
Proposal to Include Electrical Energy in the Industrial Return Statistics CERN Document Server 2003-01-01 At its 108th session on the 20 June 1997, the Council approved the Report of the Finance Committee Working Group on the Review of CERN Purchasing Policy and Procedures. Among other topics, the report recommended the inclusion of utility supplies in the calculation of the return statistics as soon as the relevant markets were deregulated, without reaching a consensus on the exact method of calculation. At its 296th meeting on the 18 June 2003, the Finance Committee approved a proposal to award a contract for the supply of electrical energy (CERN/FC/4693). The purpose of the proposal in this document is to clarify the way electrical energy will be included in future calculations of the return statistics. The Finance Committee is invited: 1. to agree that the full cost to CERN of electrical energy (excluding the cost of transport) be included in the Industrial Service return statistics; 2. to recommend that the Council approves the corresponding amendment to the Financial Rules set out in section 2 of this docum... 16. Survey of state legislative programs that include passive solar energy Energy Technology Data Exchange (ETDEWEB) Weiss, S 1979-06-01 This report surveys and evaluates state-level solar-incentive programs, including passive solar energy. The range of programs examined focuses on financial and legal incentives designed to speed the implementation of solar heating, cooling, and hot water systems. They have been evaluated by probing the wording of the incentive legislation and by interviewing state program administrators in each state to determine: (1) the extent, if any, of passive inclusion in solar-incentive programs, and (2) the level of success that various implementation techniques have achieved for encouraging passive solar designs as opposed to the more-commonly-understood active systems. 
Because no states have initiated incentive legislation designed exclusively to encourage passive solar techniques, it has been essential to determine whether legislative programs explicitly or implicitly include passive solar or if they explicitly exclude it. 17. Time-Resolved Tandem Faraday Cup Development for High Energy TNSA Particles Science.gov (United States) Padalino, S.; Simone, A.; Turner, E.; Ginnane, M. K.; Glisic, M.; Kousar, B.; Smith, A.; Sangster, C.; Regan, S. 2015-11-01 The MTW and OMEGA EP lasers at LLE utilize ultra-intense laser light to produce high-energy ion pulses through Target Normal Sheath Acceleration (TNSA). A Time-Resolved Tandem Faraday Cup (TRTF) was designed and built to collect and differentiate protons from heavy ions (HI) produced during TNSA. The TRTF includes a replaceable-thickness absorber capable of stopping a range of user-selectable HI emitted from the TNSA plasma. HI stop within the primary cup, while less massive particles continue through and deposit their remaining charge in the secondary cup, releasing secondary electrons in the process. The time-resolved beam current generated in each cup will be measured on a fast storage scope in multiple channels. A charge-exchange foil at the TRTF entrance modifies the charge state distribution of the HI to a known distribution. Using this distribution and the time of flight of the HI, the total HI current can be determined. Initial tests of the TRTF have been made using a proton beam produced by SUNY Geneseo's 1.7 MV Pelletron accelerator. A substantial reduction in secondary electron production, from 70% of the proton beam current at 2 MeV down to 0.7%, was achieved by installing a pair of dipole magnet deflectors which successfully returned the electrons to the cups in the TRTF. Ultimately, the TRTF will be used to normalize a variety of nuclear physics cross sections and stopping power measurements. Based in part upon work supported by a DOE NNSA Award #DE-NA0001944. 18.
Potential of mediation for resolving environmental disputes related to energy facilities Energy Technology Data Exchange (ETDEWEB) None 1979-12-01 This study assesses the potential of mediation as a tool for resolving disputes related to the environmental regulation of new energy facilities and identifies possible roles the Federal government might play in promoting the use of mediation. These disputes result when parties challenge an energy project on the basis of its potential environmental impacts. The paper reviews the basic theory of mediation, evaluates specific applications of mediation to recent environmental disputes, discusses the views of environmental public-interest groups towards mediation, and identifies types of energy facility-related disputes where mediation could have a significant impact. Finally, potential avenues for the Federal government to encourage use of this tool are identified. 19. Solving the high energy evolution equation including running coupling corrections International Nuclear Information System (INIS) Albacete, Javier L.; Kovchegov, Yuri V. 2007-01-01 We study the solution of the nonlinear Balitsky-Kovchegov evolution equation with the recently calculated running coupling corrections [I. I. Balitsky, Phys. Rev. D 75, 014001 (2007). and Y. Kovchegov and H. Weigert, Nucl. Phys. A784, 188 (2007).]. Performing a numerical solution we confirm the earlier result of Albacete et al. [Phys. Rev. D 71, 014003 (2005).] (obtained by exploring several possible scales for the running coupling) that the high energy evolution with the running coupling leads to a universal scaling behavior for the dipole-nucleus scattering amplitude, which is independent of the initial conditions. 
It is important to stress that the running coupling corrections calculated recently significantly change the shape of the scaling function as compared to the fixed coupling case, in particular, leading to a considerable increase in the anomalous dimension and to a slow-down of the evolution with rapidity. We then concentrate on elucidating the differences between the two recent calculations of the running coupling corrections. We explain that the difference is due to an extra contribution to the evolution kernel, referred to as the subtraction term, which arises when running coupling corrections are included. These subtraction terms were neglected in both recent calculations. We evaluate numerically the subtraction terms for both calculations, and demonstrate that when the subtraction terms are added back to the evolution kernels obtained in the two works the resulting dipole amplitudes agree with each other. We then use the complete running coupling kernel including the subtraction term to find the numerical solution of the resulting full nonlinear evolution equation with the running coupling corrections. Again the scaling regime is recovered at very large rapidity with the scaling function unaltered by the subtraction term 20. Interim performance criteria for photovoltaic energy systems. [Glossary included Energy Technology Data Exchange (ETDEWEB) DeBlasio, R.; Forman, S.; Hogan, S.; Nuss, G.; Post, H.; Ross, R.; Schafft, H. 1980-12-01 This document is a response to the Photovoltaic Research, Development, and Demonstration Act of 1978 (P.L. 95-590) which required the generation of performance criteria for photovoltaic energy systems. Since the document is evolutionary and will be updated, the term interim is used. More than 50 experts in the photovoltaic field have contributed in the writing and review of the 179 performance criteria listed in this document. 
The performance criteria address characteristics of present-day photovoltaic systems that are of interest to manufacturers, government agencies, purchasers, and all others interested in various aspects of photovoltaic system performance and safety. The performance criteria apply to the system as a whole and to its possible subsystems: array, power conditioning, monitor and control, storage, cabling, and power distribution. They are further categorized according to the following performance attributes: electrical, thermal, mechanical/structural, safety, durability/reliability, installation/operation/maintenance, and building/site. Each criterion contains a statement of expected performance (nonprescriptive), a method of evaluation, and a commentary with further information or justification. Over 50 references for background information are also given. A glossary with definitions relevant to photovoltaic systems and a section on test methods are presented in the appendices. Twenty test methods are included to measure performance characteristics of the subsystem elements. These test methods and other parts of the document will be expanded or revised as future experience and needs dictate. 1. Potential energy surface for C2H4I2+ dissociation including spin-orbit effects Science.gov (United States) Siebert, Matthew R.; Aquino, Adelia J. A.; de Jong, Wibe A.; Granucci, Giovanni; Hase, William L. 2012-10-01 Previous experiments [J. Phys. Chem. A 116, 2833 (2012)] have studied the dissociation of the 1,2-diiodoethane radical cation (C2H4I2+) and found a one-dimensional distribution of translational energy, an odd finding considering that most product relative translational energy distributions are two-dimensional. The goal of this study is to obtain an accurate understanding of the potential energy surface (PES) topology for the unimolecular decomposition reaction C2H4I2+ → C2H4I+ + I•.
This is done through comparison of many single-reference electronic structure methods, coupled-cluster single-point (energy) calculations, and multi-reference energy calculations used to quantify spin-orbit (SO) coupling effects. We find that the structure of the C2H4I2+ reactant has a substantial effect on the role of the SO coupling in the reaction energy. Both the BHandH and MP2 theories with an ECP/6-31++G** basis set, and without SO coupling corrections, provide accurate models for the reaction energetics. MP2 theory gives an unsymmetric structure with different C-I bond lengths, resulting in an SO energy for C2H4I2+ similar to that for the product I-atom and a negligible SO correction to the reaction energy. In contrast, DFT gives a symmetric structure for C2H4I2+, similar to that of the neutral C2H4I2 parent, resulting in a substantial SO correction and increasing the reaction energy by 6.0-6.5 kcal mol-1. Also, we find that, for this system, coupled-cluster single-point energy calculations are inaccurate, since a small change in geometry can lead to a large change in energy. 2. Survey of Public Understanding on Energy Resources including Nuclear Energy (I) International Nuclear Information System (INIS) Park, Se-Moon; Song, Sun-Ja 2007-01-01 Women in Nuclear-Korea (WINK) surveyed public understanding of various energy resources in early September 2006 to inform the establishment of nuclear communication policy. This survey includes other energy resources because previous works were limited to nuclear energy alone, and it also aimed to gauge the public's opinion on current methods of communicating nuclear energy to the public. The present study aims to provide data on how the public understands nuclear energy compared to other energy sources, such as fossil fuels, hydro power, and other sustainable energies.
The data obtained from this survey show different results according to respondent group: age, gender, residential area, etc. More than 2,000 people responded, drawn from the general public and university students. The survey results show that understanding of nuclear energy is more negative among women than men, and more negative among the young than the old 3. Angular and mass resolved energy distribution measurements with a gallium liquid metal ion source International Nuclear Information System (INIS) Marriott, Philip 1987-06-01 Ionisation and energy broadening mechanisms relevant to liquid metal ion sources are discussed. A review of experimental results giving a picture of source operation and a discussion of the emission mechanisms thought to occur for the ionic species and droplets emitted is presented. Further work is suggested by this review, and an analysis system for angular and mass resolved energy distribution measurements of liquid metal ion source beams has been constructed. The energy analyser has been calibrated and a series of measurements, both on and off the beam axis, of 69Ga+, Ga++ and Ga2+ ions emitted at various currents from a gallium source has been performed. A comparison is made between these results and published work where possible, and the results are discussed with the aim of determining the emission and energy spread mechanisms operating in the gallium liquid metal ion source. (author) 4. Energy-resolved X-ray imaging: Material decomposition methods adapted for spectrometric detectors International Nuclear Information System (INIS) Potop, Alexandra-Iulia 2014-01-01 Scintillator-based integrating detectors are used in conventional X-ray imaging systems. The new generation of energy-resolved semiconductor radiation detectors, based on CdTe/CdZnTe, allows counting the number of photons incident on the detector and measuring their energy.
The LDET laboratory developed pixelated spectrometric detectors for X-ray imaging, associated with a fast readout circuit, which allows working with high fluxes while maintaining good energy resolution. With this thesis, we bring our contribution to the processing of data acquired in radiographic and tomographic modes for material component quantification. Osteodensitometry was chosen as the medical application. Radiographic data were acquired by simulation with a detector which presents imperfections such as charge sharing and pile-up. The methods chosen for data processing are based on a material decomposition approach. Basis material decomposition models the linear attenuation coefficient of a material as a linear combination of the attenuations of two basis materials, based on the energy-related information acquired in each energy bin. Two approaches based on a calibration step were adapted for our application. The first is the polynomial approach used for standard dual-energy acquisitions, which was applied for two and three energies acquired with the energy-resolved detector. We searched for the optimal configuration of bins. We evaluated the limits of the polynomial approach with a study on the number of channels. To go further and take advantage of the large number of bins acquired with the detectors developed in our laboratory, a statistical approach implemented in our laboratory was adapted for the material decomposition method for quantifying mineral content in bone. The two approaches were compared using figures of merit such as bias and noise over the lengths of the materials traversed by X-rays. An experimental radiographic validation of the two approaches was done in our laboratory with a 5. Track structure for low energy ions including charge exchange processes International Nuclear Information System (INIS) Uehara, S.; Nikjoo, H. 2002-01-01 The model and development of a new generation of Monte Carlo track structure codes are described.
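The basis material decomposition idea described above — modeling the measured attenuation as a linear combination of two basis-material attenuations, with one equation per energy bin — reduces, in the two-bin case, to a 2×2 linear solve. A minimal sketch with invented attenuation values (not the calibrated polynomial or statistical methods of the thesis):

```python
# Dual-energy basis material decomposition (illustrative sketch).
# Model: mu(E) = a1*mu1(E) + a2*mu2(E). With total attenuation measured
# in two energy bins, the two coefficients follow from a 2x2 solve.
# All attenuation values below are invented for illustration.

# mu_basis[bin][material]: basis-material attenuation per energy bin
mu_basis = [[0.50, 0.20],   # low-energy bin  (e.g. bone, soft tissue)
            [0.30, 0.15]]   # high-energy bin

def decompose(meas_low, meas_high):
    """Solve the 2x2 system mu_basis @ (a1, a2) = (meas_low, meas_high)."""
    (b11, b12), (b21, b22) = mu_basis
    det = b11 * b22 - b12 * b21
    a1 = (meas_low * b22 - b12 * meas_high) / det
    a2 = (b11 * meas_high - b21 * meas_low) / det
    return a1, a2

# Forward-simulate a material that is 1.5 parts basis 1 and 0.8 parts
# basis 2, then recover the coefficients from the two bin measurements:
meas = (0.50 * 1.5 + 0.20 * 0.8, 0.30 * 1.5 + 0.15 * 0.8)
a1, a2 = decompose(*meas)  # recovers (1.5, 0.8)
```

With more than two bins the same model becomes an overdetermined system solved by least squares, which is where a statistical treatment of bin noise pays off.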
The code LEPHIST simulates full slowing down of low-energy proton history tracks in the range 1 keV-1 MeV, and the code LEAHIST simulates low-energy alpha particle history tracks in the range 1 keV-8 MeV in water. All primary ion interactions are followed down to 1 keV and all electrons to 1 eV. Tracks of secondary electrons ejected by ions were traced using the electron code KURBUC. Microdosimetric parameters derived by analysis of the generated tracks are presented. (author) 6. Time-resolved energy spectrum of a pseudospark-produced high-brightness electron beam International Nuclear Information System (INIS) Myers, T.J.; Ding, B.N.; Rhee, M.J. 1992-01-01 The pseudospark, a fast low-pressure gas discharge between a hollow cathode and a planar anode, is found to be an interesting high-brightness electron beam source. Typically, an electron beam produced in the pseudospark has a peak current of ∼1 kA, a pulse duration of ∼50 ns, and an effective emittance of ∼100 mm-mrad. The energy information of this electron beam, however, is the least understood due to the difficulty of measuring a high-current-density beam that is partially space-charge neutralized by the background ions produced in the gas. In this paper, an experimental study of the time-resolved energy spectrum is presented. The pseudospark-produced electron beam is injected into a vacuum through a small pinhole so that the electrons, without background ions, follow single-particle motion; the beam is sent through a negatively biased electrode, and only the portion of the beam whose energy is greater than the bias voltage can pass through the electrode, and the current is measured by a Faraday cup. The Faraday cup signals at various bias voltages are recorded in a digital oscilloscope. The recorded waveforms are then numerically analyzed to construct a time-resolved energy spectrum. Preliminary results are presented 7.
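The numerical step in the pseudospark measurement above — turning Faraday cup currents recorded at a series of bias voltages into an energy spectrum — amounts to retarding-potential analysis: the cup behind an electrode biased at -V collects only electrons with energy above eV, so differencing the collected current between successive bias settings yields the spectrum. A minimal sketch with invented bias and current values (not the paper's analysis code):

```python
# Retarding-potential sketch: the collected current I(V) is the integral
# of the energy spectrum above e*V, so differencing I between successive
# (ascending) bias settings recovers the electrons per energy interval.
# All bias/current values below are invented for illustration.
bias = [0, 10, 20, 30, 40]            # retarding voltage settings, ascending
current = [10.0, 7.0, 4.0, 1.5, 0.0]  # collected current at each setting

# Electrons (in current units) with energy between e*bias[i] and e*bias[i+1]:
spectrum = [current[i] - current[i + 1] for i in range(len(bias) - 1)]
```

Repeating this differencing at each time sample of the recorded waveforms gives the time-resolved spectrum the abstract describes.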
Comparison of tropical cyclogenesis processes in climate model and cloud-resolving model simulations using moist static energy budget analysis Science.gov (United States) Wing, Allison; Camargo, Suzana; Sobel, Adam; Kim, Daehyun; Murakami, Hiroyuki; Reed, Kevin; Vecchi, Gabriel; Wehner, Michael; Zarzycki, Colin; Zhao, Ming 2017-04-01 In recent years, climate models have improved such that high-resolution simulations are able to reproduce the climatology of tropical cyclone activity with some fidelity and show some skill in seasonal forecasting. However biases remain in many models, motivating a better understanding of what factors control the representation of tropical cyclone activity in climate models. We explore the tropical cyclogenesis processes in five high-resolution climate models, including both coupled and uncoupled configurations. Our analysis framework focuses on how convection, moisture, clouds and related processes are coupled and employs budgets of column moist static energy and the spatial variance of column moist static energy. The latter was originally developed to study the mechanisms of tropical convective organization in idealized cloud-resolving models, and allows us to quantify the different feedback processes responsible for the amplification of moist static energy anomalies associated with the organization of convection and cyclogenesis. We track the formation and evolution of tropical cyclones in the climate model simulations and apply our analysis both along the individual tracks and composited over many tropical cyclones. We then compare the genesis processes; in particular, the role of cloud-radiation interactions, to those of spontaneous tropical cyclogenesis in idealized cloud-resolving model simulations. 8. 
Energy-resolved X-ray detectors: the future of diagnostic imaging

OpenAIRE

Pacella, Danilo

2015-01-01

Danilo Pacella ENEA-Frascati, Rome, Italy Abstract: This paper presents recent progress in the field of X-ray detectors, which could play a role in medical imaging in the near future, with special attention to the new generation of complementary metal-oxide semiconductor (C-MOS) imagers, working in photon counting, that opened the way to energy-resolved X-ray imaging. A brief description of the detectors used so far in medical imaging (photographic films, imaging plates, flat panel detec...

9. Energy-resolved X-ray detectors: the future of diagnostic imaging

Directory of Open Access Journals (Sweden)

Pacella D

2015-01-01

Full Text Available Danilo Pacella ENEA-Frascati, Rome, Italy Abstract: This paper presents recent progress in the field of X-ray detectors, which could play a role in medical imaging in the near future, with special attention to the new generation of complementary metal-oxide semiconductor (C-MOS) imagers, working in photon counting, that opened the way to energy-resolved X-ray imaging. A brief description of the detectors used so far in medical imaging (photographic films, imaging plates, flat panel detectors), together with the most relevant imaging-quality parameters, shows the differences between, and the advantages of, these new C-MOS imagers. X-ray energy-resolved imaging is very attractive not only for the increase of contrast but also for the capability of detecting the nature and composition of the material or tissue to be investigated. Since the X-ray absorption coefficients of the different parts or organs of the patient (the object) are strongly dependent on the X-ray photon energy, this multienergy ("colored") X-ray imaging could enormously increase the probing capabilities. While dual-energy imaging is now a reality in medical practice, multienergy imaging is still in its early stages, but a promising research activity.
Based on this new technique of color X-ray imaging, the entire source–object–detector scheme could be revised in the future, optimizing both spectrum and detector for the nature and composition of the target to be investigated. In this view, a transition to a set of monoenergetic X-ray lines, suitably chosen in energy and intensity, could be envisaged instead of the present continuous spectra. Keywords: X-ray detectors, X-ray medical imaging, C-MOS imagers, dual and multienergy CT

10. Novel energy resolving x-ray pinhole camera on Alcator C-Mod

Science.gov (United States)

Pablant, N. A.; Delgado-Aparicio, L.; Bitter, M.; Brandstetter, S.; Eikenberry, E.; Ellis, R.; Hill, K. W.; Hofer, P.; Schneebeli, M.

2012-10-01

A new energy resolving x-ray pinhole camera has recently been installed on Alcator C-Mod. This diagnostic is capable of 1D or 2D imaging with a spatial resolution of ≈1 cm, an energy resolution of ≈1 keV in the range 3.5-15 keV, and a maximum time resolution of 5 ms. A novel use of a Pilatus 2 hybrid-pixel x-ray detector [P. Kraft et al., J. Synchrotron Rad. 16, 368 (2009), 10.1107/S0909049509009911] is employed, in which the lower energy threshold of individual pixels is adjusted, allowing regions of a single detector to be sensitive to different x-ray energy ranges. Development of this new detector calibration technique was done as a collaboration between PPPL and Dectris Ltd. The calibration procedure is described, and the energy resolution of the detector is characterized. Initial data from this installation on Alcator C-Mod are presented. This diagnostic provides line-integrated measurements of impurity emission which can be used to determine impurity concentrations as well as the electron energy distribution.

11. Investigation of dissimilar metal welds by energy-resolved neutron imaging.
Science.gov (United States)

Tremsin, Anton S; Ganguly, Supriyo; Meco, Sonia M; Pardal, Goncalo R; Shinohara, Takenao; Feller, W Bruce

2016-08-01

A nondestructive study of the internal structure and compositional gradient of dissimilar metal-alloy welds through energy-resolved neutron imaging is described in this paper. The ability of neutrons to penetrate thick metal objects (up to several cm) provides a unique possibility to examine samples which are opaque to other conventional techniques. The presence of Bragg edges in the measured neutron transmission spectra can be used to characterize the internal residual strain within the samples and some microstructural features, e.g. texture within the grains, while neutron resonance absorption provides the possibility to map the degree of uniformity in mixing of the participating alloys and intermetallic formation within the welds. In addition, voids and other defects can be revealed by the variation of neutron attenuation across the samples. This paper demonstrates the potential of neutron energy-resolved imaging to measure all these characteristics simultaneously in a single experiment with sub-mm spatial resolution. Two dissimilar alloy welds are used in this study: Al autogenously laser welded to steel, and Ti gas metal arc welded (GMAW) to stainless steel using Cu as a filler alloy. The cold metal transfer variant of the GMAW process was used in joining the Ti to the stainless steel in order to minimize the heat input. The distributions of the lattice parameter and texture variation in these welds, as well as the presence of voids and defects in the melt region, are mapped across the welds. The depth of the thermal front in the Al-steel weld is clearly resolved and could be used to optimize the welding process. A highly textured structure is revealed in the Ti to stainless steel joint where copper was used as a filler wire. The limited diffusion of Ti into the weld region is also verified by the resonance absorption.
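The Bragg-edge strain analysis mentioned in the abstract above rests on a simple relation: a lattice plane with spacing d_hkl can no longer Bragg-scatter neutrons with wavelength beyond λ = 2·d_hkl, so transmission jumps sharply at that wavelength, and a shift of the edge measures the local elastic strain. A minimal sketch of the arithmetic, using an assumed α-Fe {110} reference spacing for illustration:

```python
import math

def bragg_edge_wavelength(d_hkl_angstrom):
    """Wavelength (Å) of the Bragg edge for lattice spacing d_hkl: λ = 2·d."""
    return 2.0 * d_hkl_angstrom

def neutron_energy_mev(wavelength_angstrom):
    """Neutron kinetic energy (meV) at a given wavelength (Å): E ≈ 81.81/λ²."""
    return 81.81 / wavelength_angstrom ** 2

def lattice_strain(d_measured, d_reference):
    """Elastic strain from the measured shift of a Bragg edge."""
    return (d_measured - d_reference) / d_reference

# Assumed reference: α-Fe {110}, d0 = 2.0268 Å → edge at 4.0536 Å.
d0 = 2.0268
lam = bragg_edge_wavelength(d0)
print(round(lam, 4))                       # → 4.0536
print(round(neutron_energy_mev(lam), 3))   # → 4.979 (meV)
# A shifted edge at d = 2.0288 Å corresponds to tensile strain:
print(round(lattice_strain(2.0288, d0) * 1e6))  # → 987 (microstrain)
```

In an energy-resolved (time-of-flight) measurement, each pixel records a full transmission spectrum, so this edge-position fit can be performed per pixel to map strain across the weld.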
12. City and mobility: towards an integrated approach to resolve energy problems

Directory of Open Access Journals (Sweden)

Carmela Gargiulo

2012-07-01

Full Text Available The issue of integration between city, mobility and energy plays a central role in current EU policies aimed at achieving energy-saving targets, independence from fossil fuels and enhancement of the resilience of urban systems, but the strategies of the individual states are still far from implementing it. This paper proposes a reading of current policies and recent initiatives aimed at improving the energy efficiency of settlements, implemented at both the Community and national levels, in order to lay the groundwork for the definition of an integrated approach between city and mobility to resolve the energy problem. The paper is divided into six parts. The first part describes the transition from the concept of sustainability to the concept of resilience and illustrates the central role played by the latter in current urban and territorial research; the second part briefly analyzes the main and more recent European directives related to city, mobility and energy, while the third part describes how the energy problem is addressed in current programming and planning tools. The fourth and fifth parts describe the innovative practices concerning energy efficiency promoted in some European and Italian cities, aimed at the integration of urban and transport systems. The last part of the paper, finally, deals with the definition of a new systemic approach for achieving objectives of energy sustainability. This approach aims at integrating strategies and actions for mobility governance, based on the observation that most energy problems are concentrated in medium-sized and large cities.

13.
Quantitative material decomposition using spectral computed tomography with an energy-resolved photon-counting detector

International Nuclear Information System (INIS)

Lee, Seungwan; Choi, Yu-Na; Kim, Hee-Joung

2014-01-01

Dual-energy computed tomography (CT) techniques have been used to decompose materials and characterize tissues according to their physical and chemical compositions. However, these techniques are hampered by the limitations of conventional x-ray detectors operated in charge-integrating mode. Energy-resolved photon-counting detectors provide spectral information from polychromatic x-rays using multiple energy thresholds. These detectors allow simultaneous acquisition of data in different energy ranges without spectral overlap, resulting in more efficient material decomposition and quantification for dual-energy CT. In this study, a pre-reconstruction dual-energy CT technique based on volume conservation was proposed for three-material decomposition. The technique was combined with iterative reconstruction algorithms by using a ray-driven projector in order to improve the quality of decomposition images and reduce radiation dose. A spectral CT system equipped with a CZT-based photon-counting detector was used to implement the proposed dual-energy CT technique. We obtained dual-energy images of calibration and three-material phantoms consisting of low atomic number materials from the optimal energy bins determined by Monte Carlo simulations. The material decomposition process was accomplished by both the proposed and post-reconstruction dual-energy CT techniques. Linear regression and normalized root-mean-square error (NRMSE) analyses were performed to evaluate the quantitative accuracy of decomposition images. The calibration accuracy of the proposed dual-energy CT technique was higher than that of the post-reconstruction dual-energy CT technique, with fitted slopes of 0.97–1.01 and NRMSEs of 0.20–4.50% for all basis materials.
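The three-material decomposition with volume conservation described above can be viewed as a small linear system: the attenuation measured in two energy bins, plus the constraint that the three volume fractions sum to one, gives three equations in three unknowns per voxel. A minimal sketch with illustrative (not measured) attenuation coefficients:

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm) of three basis
# materials at two energy bins; the values are illustrative only.
#                 bin1   bin2
mu = np.array([[0.40, 0.25],   # material A
               [0.30, 0.22],   # material B
               [0.20, 0.18]])  # material C

def decompose(mu_measured):
    """Solve for volume fractions (fA, fB, fC) from two-bin attenuation
    plus the volume-conservation constraint fA + fB + fC = 1."""
    A = np.vstack([mu.T, np.ones(3)])   # 3 equations, 3 unknowns
    b = np.append(mu_measured, 1.0)
    return np.linalg.solve(A, b)

# Forward-simulate a voxel that is 50% A, 30% B, 20% C, then invert.
f_true = np.array([0.5, 0.3, 0.2])
measured = mu.T @ f_true
f_est = decompose(measured)
print(np.round(f_est, 3))  # → [0.5 0.3 0.2]
```

Doing this before reconstruction, on basis-material sinograms, is what distinguishes the pre-reconstruction technique from post-reconstruction decomposition of already-reconstructed images.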
In the three-material phantom study, the proposed dual-energy CT technique decreased the NRMSEs of the measured volume fractions by factors of 0.17–0.28 compared to the post-reconstruction dual-energy CT technique. It was concluded that the …

14. Full momentum- and energy-resolved spectral function of a 2D electronic system

Science.gov (United States)

Jang, Joonho; Yoo, Heun Mo; Pfeiffer, L. N.; West, K. W.; Baldwin, K. W.; Ashoori, Raymond C.

2017-11-01

The single-particle spectral function measures the density of electronic states in a material as a function of both momentum and energy, providing central insights into strongly correlated electron phenomena. Here we demonstrate a high-resolution method for measuring the full momentum- and energy-resolved electronic spectral function of a two-dimensional (2D) electronic system embedded in a semiconductor. The technique remains operational in the presence of large externally applied magnetic fields and functions even for electronic systems with zero electrical conductivity or with zero electron density. Using the technique on a prototypical 2D system, a GaAs quantum well, we uncover signatures of many-body effects involving electron-phonon interactions, plasmons, polarons, and a phonon analog of the vacuum Rabi splitting in atomic systems.

15. A tunable low-energy photon source for high-resolution angle-resolved photoemission spectroscopy

International Nuclear Information System (INIS)

Harter, John W.; Monkman, Eric J.; Shai, Daniel E.; Nie Yuefeng; Uchida, Masaki; Burganov, Bulat; Chatterjee, Shouvik; King, Philip D. C.; Shen, Kyle M.

2012-01-01

We describe a tunable low-energy photon source consisting of a laser-driven xenon plasma lamp coupled to a Czerny-Turner monochromator. The combined tunability, brightness, and narrow spectral bandwidth make this light source useful in laboratory-based high-resolution photoemission spectroscopy experiments.
The source supplies photons with energies up to ∼7 eV, delivering under typical conditions >10¹² ph/s within a 10 meV spectral bandwidth, which is comparable to helium plasma lamps and many synchrotron beamlines. We first describe the lamp and monochromator system and then characterize its output, with attention to those parameters which are of interest for photoemission experiments. Finally, we present angle-resolved photoemission spectroscopy data obtained with the light source and compare its performance to that of a conventional helium plasma lamp.

16. Time-resolved photoion imaging spectroscopy: Determining energy distribution in multiphoton absorption experiments

Science.gov (United States)

Qian, D. B.; Shi, F. D.; Chen, L.; Martin, S.; Bernard, J.; Yang, J.; Zhang, S. F.; Chen, Z. Q.; Zhu, X. L.; Ma, X.

2018-04-01

We propose an approach to determine the excitation-energy distribution due to multiphoton absorption in the case of excited systems that decay to produce different ion species. The approach is based on measuring the time-resolved photoion position spectrum using velocity-map imaging spectrometry and an unfocused laser beam with a low fluence and a homogeneous profile. Such a measurement allows us to identify the species and origin of each ion detected, and to describe the energy distribution with a pure Poisson distribution involving only one variable, which is proportional to the absolute photon-absorption cross section. A cascade decay model is used to build direct connections between the energy distribution and the probability of detecting each ionic species. Comparison between experiments and simulations permits the energy distribution, and accordingly the absolute photon-absorption cross section, to be determined. The approach is illustrated using C60 as an example. It may therefore be extended to a wide variety of molecules and clusters having decay mechanisms similar to those of fullerene molecules.

17.
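The single-variable Poisson description in the abstract above can be made concrete: if the mean number of photons absorbed per molecule is μ (proportional to the absolute absorption cross section at fixed fluence), the probability of absorbing exactly n photons is P(n) = μⁿe^(−μ)/n!, and the deposited excitation energy follows as n times the photon energy. A minimal sketch with assumed numbers (μ = 3, 3.1 eV photons; both are illustrative, not the authors' values):

```python
import math

def absorbed_photon_distribution(mean_n, n_max=20):
    """Poisson probability of absorbing n photons, n = 0..n_max, when the
    mean number μ is the single variable proportional to the absolute
    absorption cross section (at fixed fluence and photon energy)."""
    return [math.exp(-mean_n) * mean_n**n / math.factorial(n)
            for n in range(n_max + 1)]

# Example: mean of 3 photons absorbed from an unfocused, homogeneous-
# fluence pulse; photon energy 3.1 eV (400 nm, assumed).
p = absorbed_photon_distribution(3.0)
deposited_ev = [n * 3.1 for n in range(len(p))]     # energy per event
mean_E = sum(pe * e for pe, e in zip(p, deposited_ev))
print(round(mean_E, 2))  # → 9.3 (eV), i.e. 3 photons × 3.1 eV
```

Comparing such simulated distributions against the measured ion yields (through the cascade decay model) is what lets the single Poisson parameter, and hence the cross section, be fitted.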
Can a one-layer optical skin model including melanin and inhomogeneously distributed blood explain spatially resolved diffuse reflectance spectra?

Science.gov (United States)

2011-02-01

Model-based analysis of calibrated diffuse reflectance spectroscopy can be used for determining the oxygenation and concentration of skin chromophores. This study aimed at assessing the effect of including melanin in addition to hemoglobin (Hb) as a chromophore, and of compensating for inhomogeneously distributed blood (vessel packaging), in a single-layer skin model. Spectra from four humans were collected during different provocations using a two-channel fiber-optic probe with source-detector separations of 0.4 and 1.2 mm. Absolute calibrated spectra using data from either a single distance or both distances were analyzed using inverse Monte Carlo for light transport and Levenberg-Marquardt for non-linear fitting. The model fitting was excellent using a single distance. However, the estimated model failed to explain spectra from the other distance. The two-distance model did not fit the data well at either distance. Model fitting was significantly improved by including melanin and vessel packaging. The most prominent effect when fitting data from the larger separation compared to the smaller separation was a different decay of light scattering with wavelength, while the tissue fraction of Hb and the saturation were similar. For modeling spectra at both distances, we propose using either a multi-layer skin model or a more advanced model for the scattering phase function.

18. Experimental data parameterization in the resolved resonance energy range: R-matrix theory and approximations

International Nuclear Information System (INIS)

Bouland, O.

2005-01-01

The paper reviews some current approximations used in R-matrix theory for calculating angle-integrated nuclear cross sections. In particular, it distinguishes the SLBW and MLBW approximations and their ENDF-oriented practical applications.
This paper also focuses on the problem of prior resonance-parameter determination, which is compulsory for any experimental data adjustment in the resolved resonance range. The major contribution of this paper concerns R-matrix calculations made with no approximations, using the SAMMY program developed at Oak Ridge National Laboratory. These new R-matrix calculations are applied to real cases whose experimental data are extracted from radiative-capture gamma-ray measurements on the ²³Na, ¹⁹F and ²³⁸U isotopes. Small but significant R-matrix effects show up in the wings of the resonances, and especially at thermal energies, when the capture cross section is calculated without the classic Reich-Moore approximation. (author)

19. A mediation case for resolving the energy and environment dispute at Aliaga-Izmir, Turkey

International Nuclear Information System (INIS)

Mueezzinoglu, A.

2000-01-01

Aliaga town, located 50 km north of Izmir, Turkey, is facing serious air, water, and soil pollution problems of industrial origin. The town has seen widespread public reaction against the estimated environmental effects of a 500 MW power plant originally to be built by a private international company during the first half of the 1990s. This project was rejected by court order at that time, but recently a number of new power projects have emerged, and the overall environmental burdens had to be reconsidered. A mediation exercise to resolve the ongoing dispute over these power plant projects at Aliaga was recommended, and participated in, by the author in 1997. This article introduces the basis of the energy-versus-environment dispute in Aliaga and the continuing public concern about the feared impacts of the new power plants, and reviews the procedure and results of the mediation. The mediation exercise and its end results are also critically assessed.

20.
Energy-resolved X-ray radiography with controlled-drift detectors at Sincrotrone Trieste

Energy Technology Data Exchange (ETDEWEB)

Castoldi, A. (E-mail: andrea.castoldi@polimi.it); Galimberti, A.; Guazzoni, C.; Rehak, P.; Strueder, L.; Menk, R.H.

2003-09-01

The Controlled-Drift Detector (CDD) is a fully depleted silicon detector that allows 2-D position sensing and energy spectroscopy of X-rays in the range 1-20 keV with excellent time resolution. Its distinctive feature is the simultaneous readout of the charge packets stored in the detector by means of a uniform electrostatic field, leading to readout times of a few microseconds. The advantage of this readout mechanism is twofold: (i) a higher frame rate and better time resolution with respect to the charge-coupled device, which represents the reference X-ray spectroscopic imager, and (ii) a lower contribution of thermal noise due to the shorter integration time, leading to an excellent energy resolution also at room temperature. In this work we present the first experimental characterization of the CDD with synchrotron light in the range 8-30 keV, carried out at Sincrotrone Trieste. Two-dimensional energy-resolved radiographic images acquired at frame frequencies up to 100 kHz are shown. Application of the CDD to elemental absorption-contrast imaging is also presented.

1. Spatially resolving the very high energy emission from MGRO J2019+37 with VERITAS

Energy Technology Data Exchange (ETDEWEB)

Aliu, E.; Errando, M. [Department of Physics and Astronomy, Barnard College, Columbia University, NY 10027 (United States); Aune, T. [Department of Physics and Astronomy, University of California, Los Angeles, CA 90095 (United States); Behera, B.; Chen, X.; Federici, S. [DESY, Platanenallee 6, D-15738 Zeuthen (Germany); Beilicke, M.; Buckley, J. H.; Bugaev, V. [Department of Physics, Washington University, St. Louis, MO 63130 (United States); Benbow, W.; Cerruti, M.
[Fred Lawrence Whipple Observatory, Harvard-Smithsonian Center for Astrophysics, Amado, AZ 85645 (United States); Berger, K. [Department of Physics and Astronomy and the Bartol Research Institute, University of Delaware, Newark, DE 19716 (United States); Bird, R. [School of Physics, University College Dublin, Belfield, Dublin 4 (Ireland); Bouvier, A. [Santa Cruz Institute for Particle Physics and Department of Physics, University of California, Santa Cruz, CA 95064 (United States); Ciupik, L. [Astronomy Department, Adler Planetarium and Astronomy Museum, Chicago, IL 60605 (United States); Connolly, M. P. [School of Physics, National University of Ireland Galway, University Road, Galway (Ireland); Cui, W. [Department of Physics, Purdue University, West Lafayette, IN 47907 (United States); Dumm, J. [School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455 (United States); Dwarkadas, V. V. [Department of Astronomy and Astrophysics, University of Chicago, Chicago, IL 60637 (United States); Falcone, A., E-mail: ealiu@astro.columbia.edu, E-mail: nahee@uchicago.edu [Department of Astronomy and Astrophysics, 525 Davey Lab, Pennsylvania State University, University Park, PA 16802 (United States); and others

2014-06-10

We present very high energy (VHE) imaging of MGRO J2019+37 obtained with the VERITAS observatory. The bright, extended (∼2°) unidentified Milagro source is located toward the rich star formation region Cygnus-X. MGRO J2019+37 is resolved into two VERITAS sources. The faint, point-like source VER J2016+371 overlaps CTB 87, a filled-center supernova remnant (SNR) with no evidence of an SNR shell at the present time. Its spectrum is well fit in the 0.65-10 TeV energy range by a power-law model with photon index 2.3 ± 0.4. VER J2019+378 is a bright, extended (∼1°) source that likely accounts for the bulk of the Milagro emission and is notably coincident with PSR J2021+3651 and the star formation region Sh 2-104.
Its spectrum in the range 1-30 TeV is well fit with a power-law model of photon index 1.75 ± 0.3, among the hardest values measured in the VHE band, comparable to that observed near Vela-X. We explore the unusual spectrum and morphology in the radio and X-ray bands to constrain possible emission mechanisms for this source.

2. Spatially resolving the very high energy emission from MGRO J2019+37 with VERITAS

International Nuclear Information System (INIS)

Aliu, E.; Errando, M.; Aune, T.; Behera, B.; Chen, X.; Federici, S.; Beilicke, M.; Buckley, J. H.; Bugaev, V.; Benbow, W.; Cerruti, M.; Berger, K.; Bird, R.; Bouvier, A.; Ciupik, L.; Connolly, M. P.; Cui, W.; Dumm, J.; Dwarkadas, V. V.; Falcone, A.

2014-01-01

We present very high energy (VHE) imaging of MGRO J2019+37 obtained with the VERITAS observatory. The bright, extended (∼2°) unidentified Milagro source is located toward the rich star formation region Cygnus-X. MGRO J2019+37 is resolved into two VERITAS sources. The faint, point-like source VER J2016+371 overlaps CTB 87, a filled-center supernova remnant (SNR) with no evidence of an SNR shell at the present time. Its spectrum is well fit in the 0.65-10 TeV energy range by a power-law model with photon index 2.3 ± 0.4. VER J2019+378 is a bright, extended (∼1°) source that likely accounts for the bulk of the Milagro emission and is notably coincident with PSR J2021+3651 and the star formation region Sh 2-104. Its spectrum in the range 1-30 TeV is well fit with a power-law model of photon index 1.75 ± 0.3, among the hardest values measured in the VHE band, comparable to that observed near Vela-X. We explore the unusual spectrum and morphology in the radio and X-ray bands to constrain possible emission mechanisms for this source.

3.
Investigation of Prolactin Receptor Activation and Blockade Using Time-Resolved Fluorescence Resonance Energy Transfer

Directory of Open Access Journals (Sweden)

Estelle Tallet

2011-09-01

Full Text Available The prolactin receptor (PRLR) is emerging as a therapeutic target in oncology. Knowledge-based drug design led to the development of a pure PRLR antagonist (Del1-9-G129R-hPRL) that was recently shown to prevent PRL-induced mouse prostate tumorigenesis. In humans, the first gain-of-function mutation of the PRLR (PRLR I146L) was recently identified in breast tumor patients. At the molecular level, the actual mechanism of action of these two novel players in the PRL system remains elusive. In this study, we addressed whether constitutive PRLR activation (PRLR I146L) or PRLR blockade (antagonist) involves alteration of receptor oligomerization and/or of inter-chain distances compared to unstimulated and PRL-stimulated PRLR. Using a combination of biochemical and spectroscopic approaches (co-IP, blue-native electrophoresis, BRET1), we demonstrated that preformed PRLR homodimers are altered neither by PRL- or I146L-induced receptor triggering, nor by antagonist-mediated blockade. These findings were confirmed using a novel time-resolved fluorescence resonance energy transfer (TR-FRET) technology that allows monitoring of distance changes between cell-surface tagged receptors. This technology revealed that PRLR blockade or activation did not involve detectable distance changes between the extracellular domains of the receptor chains within the dimer. This study merges with our previous structural investigations suggesting that the mechanism of PRLR activation solely involves intermolecular contact adaptations leading to subtle intramolecular rearrangements.

4. Fingerprinting ancient gold by measuring Pt with spatially resolved high energy Sy-XRF

International Nuclear Information System (INIS)

Guerra, M.F.; Calligaro, T.; Radtke, M.; Reiche, I.; Riesemeier, H.
2005-01-01

Trace elements in ancient gold, such as Pt, give fundamental information on the circulation of the metal in the past. In the case of objects from the cultural heritage, the determination of trace elements generally requires non-destructive point analysis. These conditions, and the need for good detection limits, restrict the number of applicable analytical techniques. After the development of a PIXE set-up with a selective Cu or Zn filter of 75 μm, and of a PIXE-XRF set-up using a primary target of As, we tested the possibilities of spatially resolved Sy-XRF for determining Pt in gold alloys. With a Zn filter, PIXE showed a detection limit of 1000 ppm in gold, while PIXE-XRF lowers this detection limit to 80 ppm, this last value being constrained by the resonant Raman effect produced on gold. In order to improve the detection limit for Pt while keeping the non-destructiveness and access to point analysis, we developed an analytical protocol for XRF with synchrotron radiation at BESSY II, using the BAMline set-up. The L-lines of Pt were excited by a beam of energy above and below 11.564 keV and measured using a Si(Li) detector with a 50 μm Cu filter. A μ-beam of 100-250 μm² was used, according to the size of the sample. The determination of the Pt content in the samples was carried out by Monte Carlo simulation and subtraction of Au and Pt spectra obtained on pure standards. A limit of detection for Pt of 20 ppm was determined by using certified standards. The detection limits of a small set of other characteristic elements of gold were also measured using an incident energy of 33 keV.

5. Solar Energy Education. Home economics: teacher's guide. Field test edition. [Includes glossary]

Energy Technology Data Exchange (ETDEWEB)

1981-06-01

An instructional aid is provided for home economics teachers who wish to integrate the subject of solar energy into their classroom activities.
This teacher's guide was produced along with the student activities book for home economics by the US Department of Energy Solar Energy Education. A glossary of solar energy terms is included. (BCS)

6. Resolving issues at the Department of Energy/Oak Ridge Operations Facilities

International Nuclear Information System (INIS)

1988-01-01

Waste management, like many other issues, has experienced major milestones. In 1971, the Calvert Cliffs decision resulted in an entirely different approach to the consideration of environmental impact analysis in reactor siting. The accidents at Three Mile Island and Chernobyl have had profound effects on nuclear power plant design. The high-level waste repository program has had many similar experiences that have modified the course of events. The management of radioactive, hazardous chemical and mixed waste in all of the facilities of the Oak Ridge Operations (ORO) Office of the Department of Energy (DOE) took on an entirely different meaning in 1984. On April 13, 1984, Federal Judge Robert Taylor said that DOE should proceed 'with all deliberate speed' to bring the Y-12 plant into compliance with the Resource Conservation and Recovery Act and the Clean Water Act. This decision resulted from a suit brought by the Legal Environmental Assistance Foundation (LEAF) and grew out of a continuing revelation of mercury spills and other problems related to the Oak Ridge plants of DOE. In this same time frame, other events occurred in Oak Ridge that would set the stage for major changes and provide the supporting environment that allowed a very different and successful approach to resolving waste management issues at the DOE/ORO facilities. This is the origin of the Oak Ridge Model, which was recently adopted as the DOE Model. The concept is to assure that all stakeholders in waste management decisions have the opportunity to be participants from the first step.
A discussion of many of the elements that have contributed to the success of the Model follows.

7. Including Energy Efficiency and Renewable Energy Policies in Electricity Demand Projections

Science.gov (United States)

Find more information on how state and local air agencies can identify on-the-books EE/RE policies, develop a methodology for projecting a jurisdiction's energy demand, and estimate the change in power sector emissions.

8. Integrative taxonomy resolves the cryptic and pseudo-cryptic Radula buccinifera complex (Porellales, Jungermanniopsida), including two reinstated and five new species

Directory of Open Access Journals (Sweden)

Matt Renner

2013-10-01

Full Text Available Molecular data from three chloroplast markers resolve individuals attributable to Radula buccinifera into six lineages belonging to two subgenera, indicating that the species is polyphyletic as currently circumscribed. All lineages are morphologically diagnosable, but one pair exhibits such morphological overlap that its members can be considered cryptic. Molecular and morphological data justify the reinstatement of a broadly circumscribed, ecologically variable R. strangulata and of R. mittenii, and the description of five new species. Two species, Radula mittenii Steph. and R. notabilis sp. nov., are endemic to the Wet Tropics Bioregion of north-east Queensland, suggesting that high diversity and high endemism might characterise the bryoflora of this relatively isolated wet-tropical region. Radula demissa sp. nov. is endemic to southern temperate Australasia, and like R. strangulata occurs on both sides of the Tasman Sea. Radula imposita sp. nov. is a twig and leaf epiphyte found in association with waterways in New South Wales and Queensland. Another species, R. pugioniformis sp. nov., has been confused with Radula buccinifera but was not included in the molecular phylogeny. Morphological data suggest it may belong to subg. Odontoradula.
Radula buccinifera is endemic to Australia, including Western Australia and Tasmania, and to date is known from south of the Clarence River on the north coast of New South Wales. Nested within R. buccinifera is a morphologically distinct plant from Norfolk Island, described as R. anisotoma sp. nov. Radula australiana is resolved as monophyletic, sister to a species occurring in east coast Australian rainforests, and nesting among the R. buccinifera lineages with strong support. The molecular phylogeny suggests several long-distance dispersal events may have occurred. These include two east-west dispersal events from New Zealand to Tasmania and south-east Australia in R. strangulata, one east-west dispersal event from Tasmania to …

9. Time-Resolved Fluorescence Anisotropy of Bicyclo[1.1.1]pentane/Tolane-Based Molecular Rods Included in Tris(o-phenylenedioxy)cyclotriphosphazene (TPP).

Science.gov (United States)

Cipolloni, Marco; Kaleta, Jiří; Mašát, Milan; Dron, Paul I; Shen, Yongqiang; Zhao, Ke; Rogers, Charles T; Shoemaker, Richard K; Michl, Josef

2015-04-23

We examine the fluorescence anisotropy of rod-shaped guests held inside the channels of tris(o-phenylenedioxy)cyclotriphosphazene (TPP) host nanocrystals, characterized by powder X-ray diffraction and solid-state NMR spectroscopy. We address two issues: (i) are light-polarization measurements on an aqueous colloidal solution of TPP nanocrystals meaningful, or is depolarization by scattering excessive? (ii) Can measurements of the rotational mobility of the included guests be performed at loading levels low enough to suppress depolarization by intercrystallite energy transfer? We find that meaningful measurements are possible, and demonstrate that the long axis of molecular rods included in TPP channels performs negligible vibrational motion.

10.
Economic Dispatch for Power System Included Wind and Solar Thermal Energy Directory of Open Access Journals (Sweden) Saoussen BRINI 2009-07-01 Full Text Available With the fast development of alternative energy technologies, the electric power network can be composed of several renewable energy resources. These energy resources have various characteristics in terms of operational costs and reliability. In this study, the problem is the Economic Environmental Dispatching (EED) of a hybrid power system including wind and solar thermal energies. Renewable energy resources depend on climate data such as the wind speed for wind energy, and solar radiation and temperature for solar thermal energy. This article proposes a methodology to solve this problem. The resolution takes account of fuel costs and the reduction of polluting gas emissions. The resolution is done by the Strength Pareto Evolutionary Algorithm (SPEA) method and the simulations have been made on an IEEE test network (30 nodes, 8 machines, and 41 lines). 11. Energy-resolved photoemission studies of Be-containing surfaces for fusion; Energievariierte Photoemissionsstudien an berylliumhaltigen Oberflaechen fuer die Fusion Energy Technology Data Exchange (ETDEWEB) Koeppen, Martin 2013-02-04 Fusion research aims at the exploitation of the deuterium-tritium reaction for energy production. The next step on the roadmap is the construction of the experimental reactor ITER. The three elements beryllium, carbon and tungsten are to be used as armour materials for the vacuum vessel. After erosion due to plasma processes, these materials are transported and redeposited together with plasma impurities like oxygen from surface oxides. This leads to the formation of compounds by chemical reactions and diffusive processes, induced both by elevated temperatures and implantation of energetic particles.
Due to the complexity of the induced surface processes, a method is required which is capable of both qualitative and quantitative analysis of the involved chemical species. X-ray photoelectron spectroscopy (XPS) provides the chemical analysis. Since diffusive processes play an important role in solid-state reactions, a depth-resolved method is required. In this work, energy-resolved XPS using synchrotron radiation with variable photon energies is extended towards a quantitative depth-resolved analysis. For the quantitative analysis a new model is derived which calculates the depth-resolved composition and the respective composition-dependent electron inelastic mean free path in a self-consistent way. Input is the XPS data, which is normalized with all parameters influencing the photoelectron intensities. This fully quantitative model is applied to describe the interaction of energetic oxygen ions with the beryllium-tungsten alloy Be₂W. Oxygen ions from the plasma are able to interact with plasma facing materials. Formation of Be₂W is to be expected at the first wall and in the divertor region of ITER. Irradiation of this alloy leads to its decomposition. After decomposition, formation of beryllium oxide BeO is preferred compared to formation of tungsten oxides. Heating to 600 K leads to chemical reduction of tungsten oxides. Metallic Be acts as the reducing agent. 12. Full genotyping of a highly polymorphic human gene trait by time-resolved fluorescence resonance energy transfer. Directory of Open Access Journals (Sweden) Edoardo Totè Full Text Available The ability to detect the subtle variations occurring, among different individuals, within specific DNA sequences encompassed in highly polymorphic genes discloses new applications in genomics and diagnostics. DQB1 is a gene of the HLA-II DQ locus of the Human Leukocyte Antigens (HLA) system.
The polymorphisms of the trait of the DQB1 gene including codons 52-57 modulate the susceptibility to a number of severe pathologies. Moreover, the donor-receiver tissue compatibility in bone marrow transplantations is routinely assessed through crossed genotyping of DQB and DQA. For the above reasons, the development of rapid, reliable and cost-effective typing technologies of DQB1 in general, and more specifically of the codons 52-57, is a relevant although challenging task. Quantitative assessment of the fluorescence resonance energy transfer (FRET) efficiency between chromophores labelling the opposite ends of gene-specific oligonucleotide probes has proven to be a powerful tool to type DNA polymorphisms with single-nucleotide resolution. The FRET efficiency can be most conveniently quantified by applying a time-resolved fluorescence analysis methodology, i.e. time-correlated single-photon counting, which allows working on very diluted template specimens and in the presence of fluorescent contaminants. Here we present a full in-vitro characterization of the fluorescence responses of two probes when hybridized to oligonucleotide mixtures mimicking all the possible genotypes of the codons 52-57 trait of DQB1 (8 homozygous and 28 heterozygous). We show that each genotype can be effectively tagged by the combination of the fluorescence decay constants extrapolated from the data obtained with such probes. 13. Characterization of spatially resolved high resolution x-ray spectrometers for high energy density physics and light source experiments.
Science.gov (United States) Hill, K W; Bitter, M; Delgado-Aparacio, L; Efthimion, P; Pablant, N A; Lu, J; Beiersdorfer, P; Chen, H; Magee, E 2014-11-01 A high resolution 1D imaging x-ray spectrometer concept comprising a spherically bent crystal and a 2D pixelated detector is being optimized for diagnostics of small sources such as high energy density physics (HEDP) and synchrotron radiation or x-ray free electron laser experiments. This instrument is used on tokamak experiments for Doppler measurements of ion temperature and plasma flow velocity profiles. Laboratory measurements demonstrate a resolving power, E/ΔE of order 10,000 and spatial resolution better than 10 μm. Initial tests of the high resolution instrument on HEDP plasmas are being performed. 14. Measurement of the time-resolved reflection matrix for enhancing light energy delivery into a scattering medium. Science.gov (United States) Choi, Youngwoon; Hillman, Timothy R; Choi, Wonjun; Lue, Niyom; Dasari, Ramachandra R; So, Peter T C; Choi, Wonshik; Yaqoob, Zahid 2013-12-13 Multiple scatterings occurring in a turbid medium attenuate the intensity of propagating waves. Here, we propose a method to efficiently deliver light energy to the desired target depth in a scattering medium. We measure the time-resolved reflection matrix of a scattering medium using coherent time-gated detection. From this matrix, we derive and experimentally implement an incident wave pattern that optimizes the detected signal corresponding to a specific arrival time. This leads to enhanced light delivery at the target depth. The proposed method will lay a foundation for efficient phototherapy and deep-tissue in vivo imaging in the near future. 15. Identifying and Resolving Issues in EnergyPlus and DOE-2 Window Heat Transfer Calculations Energy Technology Data Exchange (ETDEWEB) Booten, C.; Kruis, N.; Christensen, C. 
2012-08-01 Issues in building energy software accuracy are often identified by comparative, analytical, and empirical testing as delineated in the BESTEST methodology. As described in this report, window-related discrepancies in heating energy predictions were identified through comparative testing of EnergyPlus and DOE-2. Multiple causes for discrepancies were identified, and software fixes are recommended to better align the models with the intended algorithms and underlying test data. 16. Uphill energy transfer in photosystem I from Chlamydomonas reinhardtii. Time-resolved fluorescence measurements at 77 K. Science.gov (United States) Giera, Wojciech; Szewczyk, Sebastian; McConnell, Michael D; Redding, Kevin E; van Grondelle, Rienk; Gibasiewicz, Krzysztof 2018-04-04 Energetic properties of chlorophylls in photosynthetic complexes are strongly modulated by their interaction with the protein matrix and by inter-pigment coupling. This spectral tuning is especially striking in photosystem I (PSI) complexes that contain low-energy chlorophylls emitting above 700 nm. Such low-energy chlorophylls have been observed in cyanobacterial PSI, algal and plant PSI-LHCI complexes, and individual light-harvesting complex I (LHCI) proteins. However, there has been no direct evidence of their presence in algal PSI core complexes lacking LHCI. In order to determine the lowest-energy states of chlorophylls and their dynamics in algal PSI antenna systems, we performed time-resolved fluorescence measurements at 77 K for PSI core and PSI-LHCI complexes isolated from the green alga Chlamydomonas reinhardtii. The pool of low-energy chlorophylls observed in PSI cores is generally smaller and less red-shifted than that observed in PSI-LHCI complexes. Excitation energy equilibration between bulk and low-energy chlorophylls in the PSI-LHCI complexes at 77 K leads to population of excited states that are less red-shifted (by ~ 12 nm) than at room temperature. 
On the other hand, analysis of the detection wavelength dependence of the effective trapping time of bulk excitations in the PSI core at 77 K provided evidence for an energy threshold at ~675 nm, above which trapping slows down. Based on these observations, we postulate that excitation energy transfer from bulk to low-energy chlorophylls and from bulk to reaction center chlorophylls are thermally activated uphill processes that likely occur via higher excitonic states of the energy-accepting chlorophylls. 17. Angle-resolved energy distributions of laser ablated silver ions in vacuum DEFF Research Database (Denmark) Hansen, T.N.; Schou, Jørgen; Lunney, J.G. 1998-01-01 The energy distributions of ions ablated from silver in vacuum have been measured in situ for pulsed laser irradiation at 355 nm. We have determined the energy spectra for directions ranging from 5° to 75° with respect to the normal in the intensity range from 100 to 400 MW/cm²… 18. How Consistent are Recent Variations in the Tropical Energy and Water Cycle Resolved by Satellite Measurements? Science.gov (United States) Robertson, F. R.; Lu, H.-I. 2004-01-01 One notable aspect of Earth's climate is that although the planet appears to be very close to radiative balance at top-of-atmosphere (TOA), the atmosphere itself and underlying surface are not. Profound exchanges of energy between the atmosphere and oceans, land and cryosphere occur over a range of time scales. Recent evidence from broadband satellite measurements suggests that even these TOA fluxes contain some detectable variations. Our ability to measure and reconstruct radiative fluxes at the surface and at the top of atmosphere is improving rapidly. One question is 'How consistent, physically, are these diverse remotely-sensed data sets'? The answer is of crucial importance to understanding climate processes, improving physical models, and improving remote sensing algorithms.
In this work we will evaluate two recently released estimates of radiative fluxes, focusing primarily on surface estimates. The International Satellite Cloud Climatology Project 'FD' radiative flux profiles are available from mid-1983 to near present and have been constructed by driving the radiative transfer physics from the Goddard Institute for Space Studies (GISS) global model with ISCCP clouds and TOVS (TIROS Operational Vertical Sounder) thermodynamic profiles. Full and clear sky SW and LW fluxes are produced. A similar product from the NASA/GEWEX Surface Radiation Budget Project, using different radiative flux codes and thermodynamics from the NASA/Goddard Earth Observing System (GEOS-1) assimilation model, makes a similar calculation of surface fluxes. However, this data set currently extends only through 1995. We also employ precipitation measurements from the Global Precipitation Climatology Project (GPCP) and the Tropical Rainfall Measuring Mission (TRMM). Finally, ocean evaporation estimates from the Special Sensor Microwave Imager (SSM/I) are considered as well as derived evaporation from the NCAR/NCEP Reanalysis. Additional information is included in the original extended 19. Annual Technology Baseline (Including Supporting Data); NREL (National Renewable Energy Laboratory) Energy Technology Data Exchange (ETDEWEB) Blair, Nate; Cory, Karlynn; Hand, Maureen; Parkhill, Linda; Speer, Bethany; Stehly, Tyler; Feldman, David; Lantz, Eric; Augustine, Chad; Turchi, Craig; O'Connor, Patrick 2015-07-08 Consistent cost and performance data for various electricity generation technologies can be difficult to find and may change frequently for certain technologies. With the Annual Technology Baseline (ATB), National Renewable Energy Laboratory provides an organized and centralized dataset that was reviewed by internal and external experts.
It uses the best information from the Department of Energy laboratory's renewable energy analysts and Energy Information Administration information for conventional technologies. The ATB will be updated annually in order to provide an up-to-date repository of current and future cost and performance data. Going forward, we plan to revise and refine the values using best available information. The ATB includes both a presentation with notes (PDF) and an associated Excel Workbook. The ATB includes the following electricity generation technologies: land-based wind; offshore wind; utility-scale solar PV; concentrating solar power; geothermal power; hydropower plants (upgrades to existing facilities, powering non-powered dams, and new stream-reach development); conventional coal; coal with carbon capture and sequestration; integrated gasification combined cycle coal; natural gas combustion turbines; natural gas combined cycle; conventional biopower; nuclear. 20. Dose optimization for dual-energy contrast-enhanced digital mammography based on an energy-resolved photon-counting detector: A Monte Carlo simulation study Science.gov (United States) Lee, Youngjin; Lee, Seungwan; Kang, Sooncheol; Eom, Jisoo 2017-03-01 Dual-energy contrast-enhanced digital mammography (CEDM) has been used to decompose breast images and improve diagnostic accuracy for tumor detection. However, this technique causes an increase of radiation dose and an inaccuracy in material decomposition due to the limitations of conventional X-ray detectors.
In this study, we simulated the dual-energy CEDM with an energy-resolved photon-counting detector (ERPCD) for reducing radiation dose and improving the quantitative accuracy of material decomposition images. The ERPCD-based dual-energy CEDM was compared to the conventional dual-energy CEDM in terms of radiation dose and quantitative accuracy. The correlation between radiation dose and image quality was also evaluated for optimizing the ERPCD-based dual-energy CEDM technique. The results showed that the material decomposition errors of the ERPCD-based dual-energy CEDM were 0.56-0.67 times lower than those of the conventional dual-energy CEDM. The imaging performance of the proposed technique was optimized at the radiation dose of 1.09 mGy, which is half of the mean glandular dose (MGD) for a single view mammogram. It can be concluded that the ERPCD-based dual-energy CEDM with an optimal exposure level is able to improve the quality of material decomposition images as well as reduce radiation dose. 1. Time-resolved soft-x-ray studies of energy transport in layered and planar laser-driven targets International Nuclear Information System (INIS) 1982-01-01 New low-energy x-ray diagnostic techniques are used to explore energy-transport processes in laser heated plasmas. Streak cameras are used to provide 15-psec time-resolution measurements of subkeV x-ray emission. A very thin (50 μg/cm²) carbon substrate provides a low-energy x-ray transparent window to the transmission photocathode of this soft x-ray streak camera. Active differential vacuum pumping of the instrument is required. The use of high-sensitivity, low secondary-electron energy-spread CsI photocathodes in x-ray streak cameras is also described. Significant increases in sensitivity with only a small and intermittent decrease in dynamic range were observed. These coherent, complementary advances in subkeV, time-resolved x-ray diagnostic capability are applied to energy-transport investigations of 1.06-μm laser plasmas.
Both solid disk targets of a variety of Z's as well as Be-on-Al layered-disk targets were irradiated with 700-psec laser pulses of selected intensity between 3 × 10¹⁴ W/cm² and 1 × 10¹⁵ W/cm². 2. Electrostatic mass spectrometer for concurrent mass-, energy- and angle-resolved measurements International Nuclear Information System (INIS) Golikov, Yu.K.; Krasnova, N.K. 1999-01-01 A new electron-optical scheme is considered. An energy analyser and a mass analyser with angular resolution are combined in one device, in which a time-of-flight principle of mass separation is used. The tool is based on the electrostatic field of quasi-conical systems possessing high energy dispersion and high angular resolution. A regime of simultaneous angular and energy resolution is found. If there is a pulsed ion source, then ion groups of equal mass will be registered at the same time at a position-sensitive detector located at the edge of the field. Real parameters of the suggested scheme are calculated 3. The Potential Of Fission Nuclear Energy In Resolving Global Climate Change International Nuclear Information System (INIS) Pevec, D. 2015-01-01 There is an international consensus on the need for a drastic reduction of carbon emissions if very serious global climate change is to be avoided. At present the target is to limit the global temperature increase to 2 °C and to keep the CO₂ concentration below 450 ppm, though some recent requests by climatologists argue for a lower limit of 1.5 °C. The carbon emission reduction has to be done in the next few decades, as climate effects are essentially determined by integral emission. The integral emissions should not exceed 1000 Gt CO₂ to keep the probability of exceeding a global temperature rise of 2 °C below 25 percent. Consequently, when we consider energy sources that could produce carbon-free energy we have to concentrate on the period not later than 2060-2065.
The sources that can take the burden of reduction in the years up to 2065 are Renewable Energy Sources (RES) and nuclear fission energy. The potential of RES has been estimated by many organizations and individuals. Their predictions indicate that RES are not likely to be sufficient to replace carbon emitters and fulfill the 2 °C limit requirements. Nuclear fission energy can make a very serious and, hopefully, timely (unlike nuclear fusion) contribution to emission reduction. Even with proven conventional reactors using a once-through fuel cycle without fuel reprocessing, the nuclear build-up in the years 2025-2065 could reach 3330 GW. With this concept a nuclear contribution of 94.5 EJ/y would be reached by 2065, while integral CO₂ emission savings would be about 500 Gt CO₂ by 2065. This shows that an essential nuclear contribution is possible without the use of plutonium and fast breeders, a technology not ready for the climate-critical next 50 years and not acceptable in the present political environment. This nuclear fission energy contribution, along with contributions from renewable sources, energy saving, and increased efficiency in energy use, can solve the climate problems. (author). 4. Energy dissipation mechanism revealed by spatially resolved Raman thermometry of graphene/hexagonal boron nitride heterostructure devices Science.gov (United States) Kim, Daehee; Kim, Hanul; Yun, Wan Soo; Watanabe, Kenji; Taniguchi, Takashi; Rho, Heesuk; Bae, Myung-Ho 2018-04-01 Understanding the energy transport by charge carriers and phonons in two-dimensional (2D) van der Waals heterostructures is essential for the development of future energy-efficient 2D nanoelectronics. Here, we performed in situ spatially resolved Raman thermometry on an electrically biased graphene channel and its hBN substrate to study the energy dissipation mechanism in graphene/hBN heterostructures.
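The capacity-to-energy arithmetic in the nuclear fission entry above (3330 GW yielding ~94.5 EJ/y) can be checked with a short calculation. The ~90% capacity factor used below is my assumption for illustration, not a figure stated in the abstract:

```python
# Check: 3330 GW of nuclear capacity vs. the quoted ~94.5 EJ/y contribution.
# The capacity factor of 0.9 is an illustrative assumption, not from the abstract.
SECONDS_PER_YEAR = 365.25 * 24 * 3600        # ~3.156e7 s
capacity_w = 3330e9                          # 3330 GW expressed in watts
capacity_factor = 0.9                        # assumed average utilization
energy_j = capacity_w * capacity_factor * SECONDS_PER_YEAR
energy_ej = energy_j / 1e18                  # joules -> exajoules
print(round(energy_ej, 1))                   # lands close to the quoted 94.5 EJ/y
```

With a 0.9 capacity factor the result reproduces the abstract's figure to within a fraction of an exajoule, which suggests that assumption is close to the one the author used.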
By comparing the temperature profile along the biased graphene channel with that along the hBN substrate, we found that the thermal boundary resistance between the graphene and hBN was in the range of (1-2) × 10⁻⁷ m² K W⁻¹ from ~100 °C to the onset of graphene break-down at ~600 °C in air. Consideration of an electro-thermal transport model together with the Raman thermometry conducted in air showed that a doping effect occurring under a strong electric field played a crucial role in the energy dissipation of the graphene/hBN device up to T ~ 600 °C. 5. A new cross-detection method for improved energy-resolving photon counting under pulse pile-up Science.gov (United States) Lee, Daehee; Lim, Kyung Taek; Park, Kyungjin; Lee, Changyeop; Cho, Gyuseong 2017-09-01 Recently, photon counting detectors (PCDs) have been replacing energy-integrating detectors in many medical imaging applications due to the formers' high resolution, low noise, and high efficiency. Under high-flux X-ray exposure, however, a superimposition of pulses, i.e., pulse pile-up, frequently occurs due to the finite output pulse width, causing distortions in the energy spectrum as a consequence. Therefore, pulse pile-up is considered a major constraint in using PCDs for high-flux X-ray applications. In this study, a new photon counting method is proposed to minimize degradations in PCD performance due to pulse pile-up. The proposed circuit was incorporated into a pixel with a size of 200 × 200 μm². It was fabricated using a 1-poly 6-metal 0.18 μm complementary metal-oxide-semiconductor (CMOS) process and had a power consumption of 7.8 μW/pixel. From the result, it was shown that the maximum count rate of the proposed circuit was increased by a factor of 4.7 when compared to that of the conventional circuit at the same pulse width of 700 ns.
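As background to the count-rate ceiling discussed in the pile-up entry above, the textbook non-paralyzable dead-time model relates the true event rate n to the measured rate m via m = n/(1 + nτ), which saturates at 1/τ. This is a standard reference model only, not the paper's cross-detection circuit; the 700 ns value matches the pulse width quoted in the abstract:

```python
# Non-paralyzable dead-time model: the measured rate saturates at 1/tau.
# Textbook background only -- not the cross-detection circuit from the abstract.
def measured_rate(true_rate_cps: float, tau_s: float) -> float:
    """Observed count rate (counts/s) for a non-paralyzable dead time tau_s."""
    return true_rate_cps / (1.0 + true_rate_cps * tau_s)

tau = 700e-9                 # 700 ns output pulse width, as quoted in the abstract
saturation = 1.0 / tau       # hard ceiling on the observed rate, ~1.4e6 counts/s
print(f"{saturation:.3g}")
```

Under this model, shortening the effective pulse width (or, as in the paper, detecting piled-up events directly) is the only way to push the usable count rate higher.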
This implies that the energy spectrum obtained by the proposed circuit is 4.7 times more resistant to distortion than that obtained by the conventional energy-resolving circuit under higher X-ray fluxes. 6. Resolving issues at the Department of Energy/Oak Ridge operations facilities International Nuclear Information System (INIS) 1988-01-01 The development of the US Department of Energy Oak Ridge Operations Office's model for waste management and its application in the Oak Ridge Reservation are discussed. The concept, simply stated, is to assure that all stakeholders in waste management decisions have the opportunity to be participants from the first step. The paper discusses the advisory committees involved in the process, subcontracting support, college and university relations, technology demonstrations and planning, other federal agency interaction, and the model meeting 7. Integrated Sachs-Wolfe effect in a quintessence cosmological model: Including anisotropic stress of dark energy International Nuclear Information System (INIS) Wang, Y. T.; Xu, L. X.; Gui, Y. X. 2010-01-01 In this paper, we investigate the integrated Sachs-Wolfe effect in the quintessence cold dark matter model with constant equation of state and constant speed of sound in the dark energy rest frame, including dark energy perturbation and its anisotropic stress. Comparing with the ΛCDM model, we find that the integrated Sachs-Wolfe (ISW) power spectra are affected by different background evolutions and dark energy perturbation. As we change the speed of sound from 1 to 0 in the quintessence cold dark matter model with given state parameters, it is found that the inclusion of dark energy anisotropic stress makes the variation of the magnitude of the ISW source uncertain, due to the anticorrelation between the speed of sound and the ratio of the dark energy density perturbation contrast to the dark matter density perturbation contrast in the ISW-source term.
Thus, the magnitude of the ISW-source term is governed by the competition between the change in the factor (1 + (3/2)ĉs²) and the change in the ratio δde/δm as ĉs² varies. 8. Energy-Water Nexus Relevant to Baseload Electricity Source Including Mini/Micro Hydropower Generation Science.gov (United States) Fujii, M.; Tanabe, S.; Yamada, M. 2014-12-01 Water, food, and energy are three sacred treasures that are necessary for human beings. However, recent factors such as population growth and the rapid increase in energy consumption have generated conflicts between water and energy. For example, there exist conflicts caused by enhanced energy use, such as between hydropower generation and riverine ecosystems and service water, between shale gas and ground water, and between geothermal energy and hot spring water. This study aims to provide the quantitative guidelines necessary for capacity building among various stakeholders to minimize water-energy conflicts in enhancing energy use. Among various kinds of renewable energy sources, we target baseload sources, especially focusing on renewable energy whose installation is socially required not only to reduce CO₂ and other greenhouse gas emissions but also to stimulate the local economy. Such renewable energy sources include micro/mini hydropower and geothermal. Three municipalities in Japan, Beppu City, Obama City and Otsuchi Town, are selected as the primary sites of this study. Based on the calculated potential supply and demand of micro/mini hydropower generation in Beppu City, for example, we estimate that the electricity demand of tens to hundreds of households could be covered by installing new micro/mini hydropower generation plants along each river. However, this result is based on existing infrastructure such as roads and electric lines. This means that greater potential is expected if the local society chooses options that enhance the infrastructure to support more micro/mini hydropower generation plants.
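Micro/mini hydropower potential of the kind estimated in the energy-water nexus entry above is conventionally sized with the hydraulic power equation P = ρ g Q H η. The flow rate, head, efficiency, and per-household demand below are illustrative assumptions of mine, not values from the study:

```python
# Hydraulic power of a small run-of-river plant: P = rho * g * Q * H * eta.
# All numeric inputs are illustrative assumptions, not data from the study.
RHO = 1000.0      # water density, kg/m^3
G = 9.81          # gravitational acceleration, m/s^2

def hydro_power_w(flow_m3s: float, head_m: float, efficiency: float) -> float:
    """Electrical output of a micro-hydro plant, in watts."""
    return RHO * G * flow_m3s * head_m * efficiency

# A modest stream: 0.1 m^3/s of flow over a 20 m head at 70% overall efficiency.
p = hydro_power_w(flow_m3s=0.1, head_m=20.0, efficiency=0.7)
households = p / 500.0    # assuming ~0.5 kW average demand per household
print(round(p), round(households))
```

Even these modest assumed parameters yield roughly 14 kW, i.e. on the order of tens of households per installation, which is consistent with the "tens through hundreds of households per river" scale reported for Beppu City.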
In addition, further capacity building in the local society is necessary. In Japan, for example, regulations under the river law and irrigation rights restrict new entry by actors to the river. Possible influences on riverine ecosystems from installing new micro/mini hydropower generation plants should also be well taken into account. Deregulation of the existing laws relevant to rivers and 9. Predicting Automotive Interior Noise Including Wind Noise by Statistical Energy Analysis OpenAIRE Yoshio Kurosawa 2017-01-01 The application of soundproof materials for the reduction of high-frequency automobile interior noise has been researched. This paper presents a sound pressure prediction technique including wind noise by Hybrid Statistical Energy Analysis (HSEA) in order to reduce the weight of acoustic insulation. HSEA uses both analytical SEA and experimental SEA. As a result of chassis dynamometer and road tests, the validity of the SEA modeling was shown, and the utility of the method was confirmed. 10. Resolving issues with environmental impact assessment of marine renewable energy installations Directory of Open Access Journals (Sweden) Ilya M. D. Maclean 2014-12-01 Full Text Available Growing concerns about climate change and energy security have fueled a rapid increase in the development of marine renewable energy installations (MREIs). The potential ecological consequences of increased use of these devices emphasize the need for high-quality environmental impact assessment (EIA). We demonstrate that these processes are hampered severely, primarily because ambiguities in the legislation and lack of clear implementation guidance are such that they do not ensure robust assessment of the significance of impacts and cumulative effects. We highlight why the regulatory framework leads to conceptual ambiguities and propose changes which, for the most part, do not require major adjustments to standard practice.
We emphasize the importance of determining the degree of confidence in impacts to permit the likelihood as well as magnitude of impacts to be quantified and propose ways in which assessment of population-level impacts could be incorporated into the EIA process. Overall, however, we argue that, instead of trying to ascertain which particular developments are responsible for tipping an already heavily degraded marine environment into an undesirable state, emphasis should be placed on better strategic assessment. 11. Dose optimization for dual-energy contrast-enhanced digital mammography based on an energy-resolved photon-counting detector: A Monte Carlo simulation study International Nuclear Information System (INIS) Lee, Youngjin; Lee, Seungwan; Kang, Sooncheol; Eom, Jisoo 2017-01-01 Dual-energy contrast-enhanced digital mammography (CEDM) has been used to decompose breast images and improve diagnostic accuracy for tumor detection. However, this technique causes an increase of radiation dose and an inaccuracy in material decomposition due to the limitations of conventional X-ray detectors. In this study, we simulated the dual-energy CEDM with an energy-resolved photon-counting detector (ERPCD) for reducing radiation dose and improving the quantitative accuracy of material decomposition images. The ERPCD-based dual-energy CEDM was compared to the conventional dual-energy CEDM in terms of radiation dose and quantitative accuracy. The correlation between radiation dose and image quality was also evaluated for optimizing the ERPCD-based dual-energy CEDM technique. The results showed that the material decomposition errors of the ERPCD-based dual-energy CEDM were 0.56–0.67 times lower than those of the conventional dual-energy CEDM. The imaging performance of the proposed technique was optimized at the radiation dose of 1.09 mGy, which is a half of the MGD for a single view mammogram. 
It can be concluded that the ERPCD-based dual-energy CEDM with an optimal exposure level is able to improve the quality of material decomposition images as well as reduce radiation dose. - Highlights: • Dual-energy mammography based on a photon-counting detector was simulated. • Radiation dose and image quality were evaluated for optimizing the proposed technique. • The proposed technique reduced radiation dose as well as improved image quality. • The proposed technique was optimized at the radiation dose of 1.09 mGy. 12. Energy-resolved visibility analysis of grating interferometers operated at polychromatic X-ray sources. Science.gov (United States) Hipp, A; Willner, M; Herzen, J; Auweter, S; Chabior, M; Meiser, J; Achterhold, K; Mohr, J; Pfeiffer, F 2014-12-15 Grating interferometry has been successfully adapted to standard X-ray tubes and is a promising candidate for a broad use of phase-contrast imaging in medical diagnostics or industrial testing. The achievable image quality using this technique is mainly dependent on the interferometer performance, with the interferometric visibility as the crucial parameter. The presented study deals with experimental investigations of the spectral dependence of the visibility in order to understand the interaction between the single contributing energies. This knowledge is especially relevant when choosing which type of setup is preferable with a polychromatic source. Our results affirm previous findings from theoretical investigations but also show that measurements of the spectral contributions to the visibility are necessary to fully characterize and optimize a grating interferometer and cannot, at present, be replaced by relying on simulated data alone. 13.
Phase-resolved fluid dynamic forces of a flapping foil energy harvester based on PIV measurements
Science.gov (United States)
Liburdy, James
2017-11-01
Two-dimensional particle image velocimetry measurements are performed in a wind tunnel to evaluate the spatial and temporal fluid dynamic forces acting on a flapping foil operating in the energy-harvesting regime. Experiments are conducted at reduced frequencies (k = fc/U) of 0.05–0.2, pitching angle of, and heaving amplitude of A/c = 0.6. The phase-averaged pressure field is obtained by integrating the pressure Poisson equation. Fluid dynamic forces are then obtained through the integral momentum equation. Results are compared with a simple force model based on the concept of flow impulse. These results reveal the detailed force distributions and their transient nature, and aid in understanding the impact of the fluid flow structures that contribute to power production.

14. Algorithms for spectral calibration of energy-resolving small-pixel detectors
International Nuclear Information System (INIS)
Scuffham, J; Veale, M C; Wilson, M D; Seller, P
2013-01-01
Small-pixel Cd(Zn)Te detectors often suffer from inter-pixel variations in gain, resulting in shifts in the individual energy spectra. These gain variations are mainly caused by inclusions and defects within the crystal structure, which affect charge transport within the material and decrease the signal pulse height. In imaging applications, spectra are commonly integrated over a particular peak of interest. This means that the individual pixels must be accurately calibrated to ensure that the same portion of the spectrum is integrated in every pixel. The development of large-area detectors with fine pixel pitch necessitates automated algorithms for this spectral calibration, due to the very large number of pixels.
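The per-pixel gain correction that such automated algorithms perform can be sketched in a few lines. The function name, window bounds, and synthetic spectra below are illustrative assumptions, not the study's implementation:

```python
import numpy as np

def calibrate_pixel_gain(spectrum, true_energy_kev, search_lo, search_hi):
    """Locate the photopeak within channels [search_lo, search_hi) and
    return the gain (keV/channel) that maps it to the known line energy."""
    window = spectrum[search_lo:search_hi]
    peak_channel = search_lo + int(np.argmax(window))
    return true_energy_kev / peak_channel

# Synthetic example: two pixels with different gains observing the
# 140.5 keV line of Tc-99m.
channels = np.arange(1024)

def gaussian_peak(center, sigma=5.0):
    return np.exp(-0.5 * ((channels - center) / sigma) ** 2)

spec_a = gaussian_peak(700)   # nominal-gain pixel
spec_b = gaussian_peak(630)   # low-gain pixel (charge-transport loss)

gain_a = calibrate_pixel_gain(spec_a, 140.5, 600, 800)
gain_b = calibrate_pixel_gain(spec_b, 140.5, 600, 800)

# After calibration, both pixels place the peak at 140.5 keV.
print(gain_a * 700, gain_b * 630)
```

Scaling the search window per pixel, as in the second algorithm discussed below, amounts to shifting `search_lo`/`search_hi` according to each pixel's highest-energy peak position before the search.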
Algorithms for automatic spectral calibration require accurate determination of characteristic X-ray or photopeak positions on a pixelwise basis. In this study, we compare two peak-searching spectral calibration algorithms for a small-pixel CdTe detector in gamma spectroscopic imaging. The first algorithm uses rigid search ranges to identify peaks in each pixel spectrum, based on the average peak positions across all pixels. The second algorithm scales the search ranges according to the position of the highest-energy peak relative to the average across all pixels. In test spectra acquired with Tc-99m, we found that the rigid search algorithm failed to correctly identify the target calibration peaks in up to 4% of pixels. In contrast, the scaled search algorithm failed in only 0.16% of pixels. Failures of the scaled search algorithm were attributed to the presence of noise events above the main photopeak, and to possible non-linearities in the spectral response of a small number of pixels. We conclude that a peak-searching algorithm based on scaling known peak spacings is simple to implement and performs well for the spectral calibration of pixellated radiation detectors.

15. Performance profiles of major energy producers, 1977. [Using EIA Financial Reporting System; 26 companies; includes glossary]
Energy Technology Data Exchange (ETDEWEB)
1980-01-01
This volume is the first report of the Financial Reporting System (FRS). The finances and economics of energy production are the main subjects addressed by the data gathered. Much information already exists because the largest firms are publicly held and file reports with the SEC. Useful as these reports are, they leave much to be desired as an account of the financial and economic aspects of the energy industry in the United States. Chapter 2 compares the 26 companies reporting to the FRS with a broad index of companies that includes both energy companies and other, non-energy industrial companies.
The comparisons are at the aggregated, consolidated company level, where public information is available. In Chapter 3, characteristics of the industry's financial structure are reviewed in the context of the FRS reporting framework. Data on horizontal diversification are presented to permit review of existing patterns and evident directions of change, as well as the relation of these patterns to firm and segment profitability. In Chapter 4, profits, new investments, and the composition of net investment in place are described by FRS size groupings. Chapter 5 traces oil and gas resource-development efforts in 1977. Data on resource-development expenditures are complemented by data on reserve holdings, changes in reserves, and characteristics of exploration and development effort. Foreign activity is compared with domestic. Chapter 6 deals specifically with crude and refined-product production and distribution.

16. Spatially resolved quantification of agrochemicals on plant surfaces using energy dispersive X-ray microanalysis.
Science.gov (United States)
Hunsche, Mauricio; Noga, Georg
2009-12-01
In the present study the principle of energy dispersive X-ray microanalysis (EDX), i.e. the detection of elements based on their characteristic X-rays, was used to localise and quantify organic and inorganic pesticides on enzymatically isolated fruit cuticles. Pesticides could be discriminated from the plant surface because of their distinctive elemental composition. The findings confirm the close relation between net intensity (NI) and the area covered by the active ingredient (AI area). Using wide and narrow concentration ranges of glyphosate and glufosinate, respectively, the results showed that quantification of the AI requires selecting appropriate regression equations that take into account NI, the peak-to-background (P/B) ratio, and the AI area.
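The calibration-curve idea behind this kind of quantification can be illustrated with a minimal least-squares sketch, using a first-order fit as one candidate regression equation. All numbers below are synthetic, not the study's data:

```python
import numpy as np

# Hypothetical calibration data: EDX net intensity (counts) measured for
# known deposited amounts of an active ingredient (ng). Values are
# illustrative only.
amount_ng = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
net_intensity = np.array([105.0, 198.0, 410.0, 795.0, 1620.0])

# Fit a first-order calibration curve: NI = a * amount + b.
a, b = np.polyfit(amount_ng, net_intensity, 1)

def quantify(ni):
    """Invert the calibration curve to estimate the deposited amount (ng)
    from a measured net intensity."""
    return (ni - b) / a

print(quantify(400.0))  # close to 40 ng for this synthetic data
```

In practice, as the abstract notes, the appropriate regression may also need the P/B ratio and the AI area as additional regressors rather than NI alone.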
The use of selected internal standards (ISs) such as Ca(NO₃)₂ improved the accuracy of the quantification slightly, but led to the formation of particular, non-typical microstructured deposits. The suitability of SEM-EDX as a general technique for quantifying pesticides was additionally evaluated on 14 agrochemicals applied at diluted or regular concentrations. Among the pesticides tested, spatial localisation and quantification of the AI amount was possible for inorganic copper and sulfur as well as for the organic agrochemicals glyphosate, glufosinate, bromoxynil and mancozeb. (c) 2009 Society of Chemical Industry.

17. Full-dimensional diabatic potential energy surfaces including dissociation: the ²E″ state of NO₃.
Science.gov (United States)
Eisfeld, Wolfgang; Vieuxmaire, Olivier; Viel, Alexandra
2014-06-14
A scheme to produce accurate full-dimensional coupled diabatic potential energy surfaces, including dissociative regions, suitable for dynamical calculations is proposed. The scheme is successfully applied to model the two-sheeted surface of the ²E″ state of the NO₃ radical. An accurate potential energy surface for the ground state of the NO₃⁻ anion is developed as well. Both surfaces are based on high-level ab initio calculations. The model consists of a diabatic potential matrix, which is expanded to higher order in terms of symmetry polynomials of symmetry coordinates. The choice of coordinates is key to the accuracy of the obtained potential energy surfaces and is discussed in detail. A second central aspect is the generation of reference data for fitting the expansion coefficients of the model, for which a stochastic approach is proposed. A third ingredient is a new and simple scheme to handle problematic regions of the potential energy surfaces that result from the massive undersampling by the reference data, which is unavoidable for high-dimensional problems.
The final analytical diabatic surfaces are used to compute the lowest vibrational levels of NO₃⁻ and the photoelectron detachment spectrum of NO₃⁻ leading to the neutral radical in the ²E″ state, by full-dimensional multi-surface wave-packet propagation for NO₃ performed using the multi-configuration time-dependent Hartree (MCTDH) method. The agreement of the simulations with available experimental data demonstrates the power of the proposed scheme and the high quality of the obtained potential energy surfaces.

18. Expressions For Total Energy And Relativistic Kinetic Energy At Low Speeds In Special Relativity Must Include Rotational And Vibrational As Well As Linear Kinetic Energies
Science.gov (United States)
Brekke, Stewart
2017-09-01
Einstein calculated the total energy at low speeds in the Special Theory of Relativity to be E_total = m₀c² + ½m₀v². However, the total energy must include the rotational and vibrational kinetic energies as well as the linear kinetic energy. If ½Iω² is the expression for the rotational kinetic energy of a mass and ½kx₀² is the vibrational kinetic energy expression of a typical mass, the expression for the total energy of a mass at low speeds must be E_total = m₀c² + ½m₀v² + ½Iω² + ½kx₀². If this expression is correct, the relativistic kinetic energy of a mass at low speeds must include the rotational and vibrational kinetic energies as well as the linear kinetic energy, since according to Einstein K = (m − m₀)c², and therefore K = ½m₀v² + ½Iω² + ½kx₀².

19. Development of a Schottky CdTe Medipix3RX hybrid photon counting detector with spatial and energy resolving capabilities
Energy Technology Data Exchange (ETDEWEB)
Gimenez, E.N., E-mail: Eva.Gimenez@diamond.ac.uk [Diamond Light Source, Harwell Campus, Oxfordshire OX11 0DE (United Kingdom)]; Astromskas, V. [University of Surrey (United Kingdom)]; Horswell, I.; Omar, D.; Spiers, J.; Tartoni, N.
[Diamond Light Source, Harwell Campus, Oxfordshire OX11 0DE (United Kingdom)]
2016-07-11
A multichip CdTe-Medipix3RX detector system was developed in order to bring the advantages of photon-counting detectors to applications in the hard X-ray energy range. The detector head consisted of 2×2 Medipix3RX ASICs bump-bonded to a 28 mm × 28 mm electron-collection Schottky-contact CdTe sensor. Schottky CdTe sensors undergo performance-degrading polarization, which increases with temperature, with flux, and with the duration for which the high-voltage (HV) bias is applied. Keeping the temperature stable and periodically refreshing the HV bias supply minimized the polarization and achieved a stable and reproducible detector response. This led to good-quality images and successful results on the energy-resolving capabilities of the system.
Highlights:
• A high-atomic-number (CdTe-sensor-based) photon-counting detector was developed.
• Polarization effects affecting the image were minimized by regularly refreshing the bias voltage and stabilizing the temperature.
• Good spatial resolution and image quality were achieved following this procedure.

20. Energy-gap dynamics of a superconductor NbN studied by time-resolved terahertz spectroscopy
Energy Technology Data Exchange (ETDEWEB)
Beck, Matthias; Leiderer, Paul [Dept. of Physics and Center for Appl. Photonics, Univ. of Konstanz (Germany)]; Kabanov, Viktor V. [Zukunftskolleg, Univ. of Konstanz (Germany)]; Gol'tsman, Gregory [Moscow State Ped. Univ., Moscow (Russian Federation)]; Helm, Manfred [Helmholtz-Zentrum Dresden-Rossendorf (Germany)]; Demsar, Jure [Dept. of Physics and Center for Appl. Photonics, Univ. of Konstanz (Germany); Zukunftskolleg, Univ. of Konstanz (Germany)]
2012-07-01
Using time-resolved terahertz (THz) spectroscopy, we performed direct studies of the photoinduced suppression and recovery of the superconducting (SC) gap in the conventional superconductor NbN. Both processes are found to be strongly dependent on temperature and excitation density.
The analysis of the data with the established phenomenological Rothwarf-Taylor model enabled us to determine the important microscopic constants: the Cooper pair-breaking rate via phonon absorption and the bare quasiparticle recombination rate. From the latter we were able to extract the dimensionless electron-phonon coupling constant, λ = 1.1 ± 0.1, in excellent agreement with theoretical estimates. The technique also allowed us to determine the absorbed energy required to suppress superconductivity, which in NbN equals the thermodynamic condensation energy (in cuprates the two differ by an order of magnitude). Finally, we present the first studies of dynamics following resonant excitation with intense narrow-band THz pulses tuned above and below the superconducting gap. These suggest an additional process, particularly pronounced near Tc, that could be attributed to amplification of superconductivity via effective quasiparticle cooling.

1. Development of wide-band, time and energy resolving, optical photon detectors with application to imaging astronomy
International Nuclear Information System (INIS)
Miller, A.J.; Cabrera, B.; Romani, R.W.; Figueroa-Feliciano, E.; Nam, S.W.; Clarke, R.M.
2000-01-01
Superconducting transition edge sensors (TESs) are showing promise for wide-band spectroscopy of individual photons from the mid-infrared (IR), through the optical, and into the near ultraviolet (UV). Our TES sensors are ∼20 μm square, 40 nm thick tungsten (W) films with a transition temperature of about 80 mK. We typically attain an energy resolution of 0.15 eV FWHM over the optical range, with a relative timing resolution of 100 ns. Single-photon events with sub-microsecond rise times and fall times of a few microseconds have been achieved, allowing count rates in excess of 30 kHz per pixel. Additionally, tungsten is approximately 50% absorptive in the optical (dropping to 10% in the IR), giving these devices an intrinsically high quantum efficiency.
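A back-of-envelope calculation using the quoted 0.15 eV FWHM shows the spectral resolving power such a sensor achieves across the optical band. The script is an illustration based only on the figures in the abstract:

```python
# Resolving power R = E / dE of a TES optical photon spectrometer with the
# quoted 0.15 eV FWHM energy resolution.
H_EV_NM = 1239.84  # photon energy-wavelength conversion: E[eV] = 1239.84 / lambda[nm]

def resolving_power(wavelength_nm, delta_e_fwhm_ev=0.15):
    energy_ev = H_EV_NM / wavelength_nm
    return energy_ev / delta_e_fwhm_ev

# A 400 nm (near-UV) photon carries ~3.1 eV; a 1000 nm (near-IR) photon ~1.24 eV.
print(round(resolving_power(400), 1))   # ~20.7
print(round(resolving_power(1000), 1))  # ~8.3
```

Low resolving power by grating-spectrograph standards, but obtained photon by photon with time tagging, which is what makes these sensors attractive for the photon-starved applications described next.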
These combined traits make our detectors attractive for fast spectrophotometers and for photon-starved applications such as wide-band, time- and energy-resolved astronomical observations. We present recent results from our work toward the fabrication and testing of the first TES optical photon imaging arrays.

2. Calorimetric low-temperature detectors on semiconductor base for the energy-resolving detection of heavy ions
International Nuclear Information System (INIS)
Kienlin, A. von.
1994-01-01
In this thesis, calorimetric low-temperature detectors for the energy-resolving detection of heavy ions were developed and successfully applied for the first time. Two different detector types were constructed, both of which use a semiconductor thermistor to read out the temperature increase caused by a particle impact. In the first detector type, the thermistor simultaneously served as the absorber; the thickness of the germanium crystals was sufficient to stop the studied heavy ions completely. In the second type, a composite calorimeter, a sapphire crystal glued onto a germanium thermistor served as the absorber for the incident heavy ions. The working point of the calorimeter lies in the temperature range 1.2–4.2 K, which is reachable with a pumped ⁴He cryostat. The temperature of the calorimeter rises by about 20–30 μK after the incidence of a single α particle, and by up to a few mK after a heavy-ion incidence. An absolute energy resolution of 400–500 keV was reached. In nine beam times the calorimeters were irradiated with heavy ions (²⁰Ne, ⁴⁰Ar, ¹³⁶Xe, ²⁰⁸Pb, ²⁰⁹Bi) of different energies (3.6 MeV/nucleon < E < 12.5 MeV/nucleon), elastically scattered from gold foils. In the pulse-height spectra of the first detector type, relatively broad line shapes with a complex structure were observed.
Systematic measurements revealed dependences of the complex line structures on the operational parameters of the detector, the detector temperature, and the position of the incident particle. Together with the results of further experiments, a possible interpretation of these phenomena is presented. In contrast to the complex line structures of the pure germanium thermistor, the line shapes in the pulse-height spectra recorded with the composite germanium/sapphire calorimeter are narrow and Gaussian.

3. 78 FR 34372 - TGP Energy Management, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...
Science.gov (United States)
2013-06-07
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER13-1586-000] TGP Energy Management, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket Section 204 Authorization This is a supplemental notice in the above-referenced proceeding, of TGP Energy...

4. 75 FR 10245 - S.J. Energy Partners, Inc.; Supplemental Notice That Initial Market-Based Rate Filing Includes...
Science.gov (United States)
2010-03-05
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER10-735-000] S.J. Energy Partners, Inc.; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket... proceeding of S.J. Energy Partners, Inc.'s application for market-based rate authority, with an accompanying...

5. 75 FR 37430 - Plymouth Rock Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...
Science.gov (United States)
2010-06-29
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER10-1470-000] Plymouth Rock Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket... of Plymouth Rock Energy, LLC.'s application for market-based rate authority, with an accompanying...

6.
78 FR 54464 - ABC Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for...
Science.gov (United States)
2013-09-04
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER13-2260-000] ABC Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket Section 204 Authorization This is a supplemental notice in the above-referenced proceeding, of ABC Energy, LLC...

7. 77 FR 64980 - Collegiate Clean Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...
Science.gov (United States)
2012-10-24
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER13-33-000] Collegiate Clean Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for... Collegiate Clean Energy, LLC's application for market-based rate authority, with an accompanying rate tariff...

8. 76 FR 19351 - Stream Energy Maryland, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...
Science.gov (United States)
2011-04-07
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER11-3188-000] Stream Energy Maryland, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket Section 204 Authorization This is a supplemental notice in the above-referenced proceeding Stream Energy...

9. 76 FR 69267 - Stream Energy New Jersey, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...
Science.gov (United States)
2011-11-08
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER12-225-000] Stream Energy New Jersey, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for... Stream Energy New Jersey, LLC's application for market-based rate authority, with an accompanying rate...

10. 77 FR 47625 - Beebe Renewable Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...
Science.gov (United States)
2012-08-09
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER12-2311-000] Beebe Renewable Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for... Beebe Renewable Energy, LLC's application for market-based rate authority, with an accompanying rate...

11. 78 FR 34371 - Centinela Solar Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...
Science.gov (United States)
2013-06-07
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER13-1561-000] Centinela Solar Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for... Centinela Solar Energy, LLC's application for market-based rate authority, with an accompanying rate...

12. 77 FR 47625 - Laurel Hill Wind Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...
Science.gov (United States)
2012-08-09
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER12-2313-000] Laurel Hill Wind Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request For... Laurel Hill Wind Energy, LLC's application for market-based rate authority, with an accompanying rate...

13. 75 FR 10245 - DPL Energy Resources, Inc.; Supplemental Notice That Initial Market-Based Rate Filing Includes...
Science.gov (United States)
2010-03-05
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER10-726-000] DPL Energy Resources, Inc.; Supplemental Notice That Initial Market-Based Rate Filing Includes Request For Blanket... proceeding of DPL Energy Resources, Inc.'s application for market-based rate authority, with an accompanying...

14. 75 FR 74711 - Planet Energy (USA) Corp.; Supplemental Notice That Initial Market-Based Rate Filing Includes...
Science.gov (United States)
2010-12-01
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No.
ER11-2166-000] Planet Energy (USA) Corp.; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket... proceeding, of Planet Energy (USA) Corp.'s application for market-based rate authority, with an accompanying...

15. 78 FR 55250 - TEC Energy Inc.; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for...
Science.gov (United States)
2013-09-10
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER13-2304-000] TEC Energy Inc.; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket Section 204 Authorization This is a supplemental notice in the above-referenced proceeding, of TEC Energy Inc...

16. 75 FR 359 - Google Energy LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for...
Science.gov (United States)
2010-01-05
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER10-468-000] Google Energy LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket Section... of Google Energy LLC's application for market-based rate authority, with an accompanying rate tariff...

17. 77 FR 21555 - Flat Ridge 2 Wind Energy LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...
Science.gov (United States)
2012-04-10
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER12-1400-000] Flat Ridge 2 Wind Energy LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket... Wind Energy LLC's application for market-based rate authority, with an accompanying rate tariff, noting...

18. 75 FR 18202 - Vantage Wind Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...
Science.gov (United States)
2010-04-09
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No.
ER10-956-000] Vantage Wind Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket... of Vantage Wind Energy, LLC's application for market-based rate authority, with an accompanying rate...

19. 77 FR 41400 - Mehoopany Wind Energy LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...
Science.gov (United States)
2012-07-13
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER12-2200-000] Mehoopany Wind Energy LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket... Wind Energy LLC's application for market-based rate authority, with an accompanying rate tariff, noting...

20. 76 FR 6614 - Elk Wind Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request...
Science.gov (United States)
2011-02-07
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER11-2765-000] Elk Wind Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket... proceeding of Elk Wind Energy, LLC's application for market-based rate authority, with an accompanying rate...

1. Privacy-preserving smart meter control strategy including energy storage losses
OpenAIRE
Avula, Chinni Venkata Ramana R.; Oechtering, Tobias J.; Månsson, Daniel
2018-01-01
Privacy-preserving smart meter control strategies proposed in the literature so far make some ideal assumptions, such as instantaneous control without delay and lossless energy storage systems. In this paper, we present a one-step-ahead predictive control strategy using Bayesian risk to measure and control privacy leakage with an energy storage system. The controller estimates the energy state using a three-circuit energy storage model to account for steady-state energy losses. With numerical exp...

2. Protein activation dynamics in cells and tumor micro arrays assessed by time resolved Förster resonance energy transfer.
Science.gov (United States)
Calleja, Véronique; Leboucher, Pierre; Larijani, Banafshé
2012-01-01
Analytical time-resolved Förster resonance energy transfer (FRET) can be exploited for assessing, in cells and tumor microarrays, the activation status and dynamics of oncoproteins such as epidermal growth factor receptor (EGFR1) and their downstream effectors such as protein kinase B (PKB) and 3-phosphoinositide-dependent protein kinase 1 (PDK1). The outcome of our research, involving the application of quantitative imaging to investigate the molecular mechanisms of phosphoinositide-dependent enzymes such as PKB and PDK1, is a refined model describing the dynamics and regulation of these two oncoproteins in live cells. Our translational research exploits a quantitative FRET method for establishing the activation status of predictive biomarkers in tumor microarrays. We developed a two-site FRET assay monitored by automated frequency-domain fluorescence lifetime imaging microscopy (FLIM). As a proof of principle, we tested our methodology by assessing EGFR1 activation status in tumor microarrays from head and neck patients. Our two-site FRET assay, by high-throughput frequency-domain FLIM, has great potential to provide prognostic and perhaps predictive biomarkers. Copyright © 2012 Elsevier Inc. All rights reserved.

3. The Energy-Water Nexus: Spatially-Resolved Analysis of the Potential for Desalinating Brackish Groundwater by Use of Solar Energy
Directory of Open Access Journals (Sweden)
Jill B. Kjellsson
2015-06-01
Full Text Available This research looks at coupling desalination with renewable energy sources to create a high-value product (treated water) from two low-value resources (brackish groundwater and intermittent solar energy). Desalination of brackish groundwater is already being considered as a potential new water supply in Texas.
This research uses Texas as a testbed for spatially resolved analysis techniques, considering depth to brackish groundwater, water quality, and solar radiation across Texas to determine the locations with the best potential for integrating solar energy with brackish groundwater desalination. The framework presented herein can be useful for policymakers, regional planners, and project developers as they consider where to site desalination facilities coupled with solar photovoltaics. Results suggest that the northwestern region of Texas, with abundant sunshine and groundwater at relatively shallow depths and low salinity in areas with freshwater scarcity, has the highest potential for solar-powered desalination. The capacity for solar-photovoltaic-powered reverse osmosis desalination was found to range from 1.56 × 10⁻⁶ to 2.93 × 10⁻⁵ cubic meters of water per second per square meter of solar panel (m³/s/m²).

4. Energy star compliant voice over internet protocol (VoIP) telecommunications network including energy star compliant VoIP devices
Energy Technology Data Exchange (ETDEWEB)
2012-11-06
A Voice over Internet Protocol (VoIP) communications system, a method of managing a communications network in such a system, and a program product therefor. The system/network includes an ENERGY STAR (E-star) aware softswitch and E-star compliant communications devices at system endpoints. The E-star aware softswitch allows E-star compliant communications devices to enter and remain in power-saving mode. The E-star aware softswitch spools messages and forwards only selected messages (e.g., calls) to the devices in power-saving mode. When the E-star compliant communications devices exit power-saving mode, the E-star aware softswitch forwards spooled messages.

5.
Non-Destructive Study of Bulk Crystallinity and Elemental Composition of Natural Gold Single Crystal Samples by Energy-Resolved Neutron Imaging
Science.gov (United States)
Tremsin, Anton S.; Rakovan, John; Shinohara, Takenao; Kockelmann, Winfried; Losko, Adrian S.; Vogel, Sven C.
2017-01-01
Energy-resolved neutron imaging enables non-destructive analyses of bulk structure and elemental composition, which can be resolved with high spatial resolution at bright pulsed spallation neutron sources thanks to recent developments and improvements of neutron counting detectors. This technique, suitable for many applications, is demonstrated here with a specific study of ~5–10 mm thick natural gold samples. Through the analysis of neutron absorption resonances, the spatial distribution of palladium (with average elemental concentrations of ~0.4 atom% and ~5 atom%) is mapped within the gold samples. At the same time, the analysis of coherent neutron scattering in the thermal and cold energy regimes reveals which samples have a single-crystalline bulk structure through the entire sample volume. A spatially resolved analysis is possible because neutron transmission spectra are measured simultaneously in each detector pixel across the epithermal, thermal and cold energy ranges. With a pixel size of 55 μm and a detector area of 512 × 512 pixels, a total of 262,144 neutron transmission spectra are measured concurrently. The results of our experiments indicate that high-resolution energy-resolved neutron imaging is a very attractive analytical technique in cases where other conventional non-destructive methods are ineffective due to sample opacity. PMID:28102285

6. Robust scaling laws for energy confinement time, including radiated fraction, in Tokamaks
Science.gov (United States)
Murari, A.; Peluso, E.; Gaudio, P.; Gelfusa, M.
2017-12-01
In recent years, the limitations of scalings in power-law form obtained from traditional log regression have become increasingly evident in many fields of research. Given the wide gap in operational space between present-day and next-generation devices, the robustness of the obtained models, which must guarantee reasonable extrapolability, is a major issue. In this paper, a new technique, called symbolic regression, is reviewed, refined, and applied to the ITPA database for extracting scaling laws of the energy-confinement time at different radiated-fraction levels. The main advantage of this new methodology is its ability to determine the most appropriate mathematical form of the scaling laws for modelling the available databases, without the restriction that they be power laws. In a completely new development, this technique is combined with the concept of geodesic distance on Gaussian manifolds so as to take into account the error bars in the measurements and provide more reliable models. Robust scaling laws, including radiated fraction as a regressor, have been found; they are not in power-law form and are significantly better than the traditional scalings. These scaling laws extrapolate quite differently to ITER and therefore require serious consideration. On the other hand, given the limitations of the existing databases, dedicated experimental investigations will have to be carried out to fully understand the impact of radiated fractions on confinement in metallic machines and in the next generation of devices.

7. Calculations of environmental benefits from using geothermal energy must include the rebound effect
DEFF Research Database (Denmark)
2017-01-01
… and energy production patterns are simulated using data from countries with similar environmental conditions but do not use geothermal or hydropower to the same extent as Iceland.
Because of the rapid shift towards renewable energy and exclusion of external energy provision, the country is considered... 8. Hybrid Design of Electric Power Generation Systems Including Renewable Sources of Energy Science.gov (United States) Wang, Lingfeng; Singh, Chanan 2008-01-01 With the stricter environmental regulations and diminishing fossil-fuel reserves, there is now higher emphasis on exploiting various renewable sources of energy. These alternative sources of energy are usually environmentally friendly and emit no pollutants. However, the capital investments for those renewable sources of energy are normally high,… 9. Probing dark energy with cluster counts and cosmic shear power spectra: including the full covariance International Nuclear Information System (INIS) 2007-01-01 Several dark energy experiments are available from a single large-area imaging survey and may be combined to improve cosmological parameter constraints and/or test inherent systematics. Two promising experiments are cosmic shear power spectra and counts of galaxy clusters. However, the two experiments probe the same cosmic mass density field in large-scale structure, so the combination may be less powerful than first thought. We investigate the cross-covariance between the cosmic shear power spectra and the cluster counts based on the halo model approach, where the cross-covariance arises from the three-point correlations of the underlying mass density field. Fully taking into account the cross-covariance, as well as non-Gaussian errors on the lensing power spectrum covariance, we find a significant cross-correlation between the lensing power spectrum signals at multipoles ℓ ∼ 10³ and the cluster counts containing halos with masses M ≳ 10¹⁴ M☉. Including the cross-covariance for the combined measurement degrades, and in some cases improves, the total signal-to-noise (S/N) ratios by up to ∼20% relative to when the two are independent.
For cosmological parameter determination, the cross-covariance has a smaller effect as a result of working in a multi-dimensional parameter space, implying that the two observables can be considered independent to a good approximation. We also discuss the fact that cluster count experiments using lensing-selected mass peaks could be more complementary to cosmic shear tomography than mass-selected cluster counts of the corresponding mass threshold. Using lensing-selected clusters with a realistic usable detection threshold ((S/N)_cluster ∼ 6 for a ground-based survey), the uncertainty on each dark energy parameter may be roughly halved by the combined experiments, relative to using the power spectra alone. 10. Optimization of piezoelectric cantilever energy harvesters including non-linear effects International Nuclear Information System (INIS) Patel, R; McWilliam, S; Popov, A A 2014-01-01 This paper proposes a versatile non-linear model for predicting piezoelectric energy harvester performance. The presented model includes (i) material non-linearity, for both substrate and piezoelectric layers, and (ii) geometric non-linearity incorporated by assuming inextensibility and accurately representing beam curvature. The addition of a sub-model, which utilizes the transfer matrix method to predict eigenfrequencies and eigenvectors for segmented beams, allows for accurate optimization of piezoelectric layer coverage. A validation of the overall theoretical model is performed through experimental testing on both uniform and non-uniform samples manufactured in-house. For the harvester composition used in this work, the magnitude of material non-linearity exhibited by the piezoelectric layer is 35 times greater than that of the substrate layer. It is also observed that material non-linearity, responsible for reductions in resonant frequency with increases in base acceleration, is dominant over geometric non-linearity for standard piezoelectric harvesting devices.
Finally, over the tested range, energy loss due to damping is found to increase in a quasi-linear fashion with base acceleration. During an optimization study on piezoelectric layer coverage, results from the developed model were compared with those from a linear model. Unbiased comparisons between harvesters were realized by using devices with identical natural frequencies—created by adjusting the device substrate thickness. Results from three studies, each with a different assumption on mechanical damping variations, are presented. Findings showed that, depending on damping variation, a non-linear model is essential for such optimization studies with each model predicting vastly differing optimum configurations. (paper) 11. Proceedings of the Wind Energy and Birds/Bats Workshop: Understanding and Resolving Bird and Bat Impacts Energy Technology Data Exchange (ETDEWEB) Schwartz, Susan Savitt (ed.) 2004-09-01 Most conservation groups support the development of wind energy in the US as an alternative to fossil and nuclear-fueled power plants to meet growing demand for electrical energy. However, concerns have surfaced over the potential threat to birds, bats, and other wildlife from the construction and operation of wind turbine facilities. Co-sponsored by the American Bird Conservancy (ABC) and the American Wind Energy Association (AWEA), the Wind Energy and Birds/Bats Workshop was convened to examine current research on the impacts of wind energy development on avian and bat species and to discuss the most effective ways to mitigate such impacts. 
On 18-19 May 2004, 82 representatives from government, non-government organizations, private business, and academia met to (1) review the status of the wind industry and current project development practices, including pre-development risk assessment and post-construction monitoring; (2) learn what is known about direct, indirect (habitat), and cumulative impacts on birds and bats from existing wind projects; about relevant aspects of bat and bird migration ecology; about offshore wind development experience in Europe; and about preventing, minimizing, and mitigating avian and bat impacts; (3) review wind development guidelines developed by the USFWS and the Washington State Department of Fish and Wildlife; and (4) identify topics needing further research and to discuss what can be done to ensure that research is both credible and accessible. These Workshop Proceedings include detailed summaries of the presentations made and the discussions that followed. 12. Solar energy collector including a weightless balloon with sun tracking means Science.gov (United States) Hall, Frederick F. 1978-01-01 A solar energy collector having a weightless balloon, the balloon including a transparent polyvinylfluoride hemisphere reinforced with a mesh of ropes secured to its outside surface, and a laminated reflector hemisphere, the inner layer being clear and aluminized on its outside surface and the outer layer being opaque, the balloon being inflated with lighter-than-air gas. A heat collection probe extends into the balloon along the focus of reflection of the reflective hemisphere for conducting coolant into and out of the balloon. The probe is mounted on apparatus for keeping the probe aligned with the sun's path, the apparatus being founded in the earth for withstanding wind pressure on the balloon. The balloon is lashed to the probe by ropes adhered to the outer surface of the balloon for withstanding wind pressures of 100 miles per hour. 
Preferably, the coolant is liquid sodium-potassium eutectic alloy, which will not normally freeze at night in the temperate zones and, when heated to 4,000 °R, exerts a pressure of only a few atmospheres. 13. Development method of Hybrid Energy Storage System, including PEM fuel cell and a battery Science.gov (United States) Ustinov, A.; Khayrullina, A.; Borzenko, V.; Khmelik, M.; Sveshnikova, A. 2016-09-01 Development of fuel cell (FC) and hydrogen metal-hydride storage (MH) technologies continuously demonstrates higher efficiency rates and higher safety, as hydrogen is stored at low pressures of about 2 bar in a bounded state. A combination of an FC/MH system with an electrolyser, powered by a renewable source, allows creation of an almost fully autonomous power system, which could potentially replace a diesel generator as a back-up power supply. However, the system must be extended with an electrochemical battery to start up the FC and compensate the electric load when the FC fails to deliver the necessary power. The present paper delivers the results of experimental and theoretical investigation of a hybrid energy system, including a proton exchange membrane (PEM) FC, an MH accumulator and an electrochemical battery; a development methodology for such systems; and the modelling of different battery types using a hardware-in-the-loop approach. The economic efficiency of the proposed solution is discussed using the example of power supply of the town of Batamai in Russia. 14. Non-contact measurement of partial gas pressure and distribution of elemental composition using energy-resolved neutron imaging Science.gov (United States) Tremsin, A. S.; Losko, A. S.; Vogel, S. C.; Byler, D. D.; McClellan, K. J.; Bourke, M. A. M.; Vallerga, J. V. 2017-01-01 Neutron resonance absorption imaging is a non-destructive technique that can characterize the elemental composition of a sample by measuring nuclear resonances in the spectrum of a transmitted beam.
Recent developments in pixelated time-of-flight imaging detectors coupled with pulsed neutron sources pose new opportunities for energy-resolved imaging. In this paper we demonstrate non-contact measurements of the partial pressure of xenon and krypton gases encapsulated in a steel pipe while simultaneously passing the neutron beam through high-Z materials. The configuration was chosen as a proof of principle demonstration of the potential to make non-destructive measurement of gas composition in nuclear fuel rods. The pressure measured from neutron transmission spectra (˜739 ± 98 kPa and ˜751 ± 154 kPa for two Xe resonances) is in relatively good agreement with the pressure value of ˜758 ± 21 kPa measured by a pressure gauge. This type of imaging has been performed previously for solids with a spatial resolution of ˜100 μm. In the present study it is demonstrated that the high penetration capability of epithermal neutrons enables quantitative mapping of gases encapsulated within high-Z materials such as steel, tungsten, urania and others. This technique may be beneficial for the non-destructive testing of bulk composition of objects (such as spent nuclear fuel assemblies and others) containing various elements opaque to other more conventional imaging techniques. The ability to image the gaseous substances concealed within solid materials also allows non-destructive leak testing of various containers and ultimately measurement of gas partial pressures with sub-mm spatial resolution. 15. Non-contact measurement of partial gas pressure and distribution of elemental composition using energy-resolved neutron imaging Directory of Open Access Journals (Sweden) A. S. Tremsin 2017-01-01 16. 76 FR 67720 - Bishop Hill Energy III LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes... Science.gov (United States) 2011-11-02 ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER12-164-000] Bishop Hill Energy III LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket Section 204 Authorization This is a supplemental notice in the above-referenced proceeding of Bishop Hill... 17.
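The pressure values quoted in the neutron-imaging entries above follow from two textbook relations: Beer-Lambert attenuation of the beam at an absorption resonance, and the ideal-gas law to convert the inferred number density into a partial pressure. The sketch below is illustrative only; the cross-section, path length, and transmission values are invented placeholders, not data from the paper.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def gas_pressure_from_transmission(t_res, sigma_m2, path_m, temp_k):
    """Infer a partial gas pressure from the transmission dip at a
    neutron absorption resonance.

    Beer-Lambert: T = exp(-n * sigma * L)  ->  n = -ln(T) / (sigma * L)
    Ideal gas:    P = n * k_B * temp
    """
    n = -math.log(t_res) / (sigma_m2 * path_m)  # number density, m^-3
    return n * K_B * temp_k                     # pressure, Pa

# Hypothetical numbers for illustration (not from the abstract):
# a 1 cm gas column, a 10000 barn resonance cross-section, room temperature.
sigma = 1.0e4 * 1.0e-28   # 10000 barn expressed in m^2
p = gas_pressure_from_transmission(t_res=0.95, sigma_m2=sigma,
                                   path_m=0.01, temp_k=293.0)
print(f"inferred partial pressure: {p/1e3:.0f} kPa")
```

In practice the resonance cross-section varies strongly with neutron energy, so the analysis fits the full resonance line shape rather than a single transmission value; the single-point formula above only conveys the principle.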
76 FR 67721 - Bishop Hill Energy LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes... Science.gov (United States) 2011-11-02 ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER12-161-000] Bishop Hill Energy LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket Section 204 Authorization This is a supplemental notice in the above-referenced proceeding of Bishop Hill... 18. 77 FR 6109 - Bishop Hill Energy II LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes... Science.gov (United States) 2012-02-07 ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER12-846-000] Bishop Hill Energy II LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket Section 204 Authorization This is a supplemental notice in the above-referenced proceeding of Bishop Hill... 19. 76 FR 67721 - Bishop Hill Energy II LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes... Science.gov (United States) 2011-11-02 ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER12-162-000] Bishop Hill Energy II LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket Section 204 Authorization This is a supplemental notice in the above-referenced proceeding of Bishop Hill... 20. 76 FR 69267 - Stream Energy Columbia, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes... Science.gov (United States) 2011-11-08 ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER12-224-000] Stream Energy Columbia, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket Section 204 Authorization This is a supplemental notice in the above-referenced proceeding of Stream... 1. 77 FR 45349 - Stream Energy New York, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...
Science.gov (United States) 2012-07-31 ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER12-2301-000] Stream Energy New York, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket Section 204 Authorization This is a supplemental notice in the above-referenced proceeding of Stream... 2. Fossil energy consumption and greenhouse gas emissions, including soil carbon effects, of producing agriculture and forestry feedstocks Science.gov (United States) Christina E. Canter; Zhangcai Qin; Hao Cai; Jennifer B. Dunn; Michael Wang; D. Andrew Scott 2017-01-01 The GHG emissions and fossil energy consumption associated with producing potential biomass supply in the select BT16 scenarios include emissions and energy consumption from biomass production, harvest/collection, transport, and pre-processing activities to the reactor throat. Emissions associated with energy, fertilizers, and... 3. 78 FR 27219 - Osprey Energy Center, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes... Science.gov (United States) 2013-05-09 ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER13-1406-000] Osprey Energy Center, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket Section 204 Authorization This is a supplemental notice in the above-referenced proceeding of Osprey... 4. 77 FR 6109 - Mariposa Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request... Science.gov (United States) 2012-02-07 ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER12-896-000] Mariposa Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket Section 204 Authorization This is a supplemental notice in the above-referenced proceeding of Mariposa... 5. 77 FR 35373 - Duke Energy Dicks Creek, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...
Science.gov (United States) 2012-06-13 ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER12-1951-000] Duke Energy Dicks Creek, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket... Dicks Creek, LLC's application for market-based rate authority, with an accompanying rate tariff, noting... 6. 76 FR 26283 - Blue Chip Energy LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request... Science.gov (United States) 2011-05-06 ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER11-3467-000] Blue Chip Energy LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket Section 204 Authorization This is a supplemental notice in the above-referenced proceeding of Blue Chip... 7. 77 FR 28594 - Bethel Wind Energy LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes... Science.gov (United States) 2012-05-15 ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER12-1739-000] Bethel Wind Energy LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket Section 204 Authorization This is a supplemental notice in the above-referenced proceeding of Bethel Wind... 8. 77 FR 28593 - Rippey Wind Energy LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes... Science.gov (United States) 2012-05-15 ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER12-1740-000] Rippey Wind Energy LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket Section 204 Authorization This is a supplemental notice in the above-referenced proceeding of Rippey Wind... 9. Including Pressure Measurements in Supervision of Energy Efficiency of Wastewater Pump Systems DEFF Research Database (Denmark) Larsen, Torben; Arensman, Mareike; Nerup-Jensen, Ole 2016 energy).
This article presents a method for continuous supervision of the performance of both the pump and the pipeline in order to maintain the initial specific energy consumption as close as possible to the original value from when the system was commissioned. The method is based on pressure measurements only. The flow is determined indirectly from pressure fluctuations during pump run-up. 10. ICT Enabling More Energy Efficient Processes, Including e-Invoicing as a Case NARCIS (Netherlands) Bomhof, F.W.; Hoorik, P.M. van; Hoeve, M.C. 2012-01-01 ICT has the potential to enable a low carbon economy, as pointed out by many studies. One example of the energy (and CO2) saving potential of ICT is illustrated in this chapter: how much energy (and emissions) can be saved if the invoicing process is redesigned? Although there is a net positive 11. Opportunities in the Fusion Energy Sciences Program [Includes Appendix C: Topical Areas Characterization] Energy Technology Data Exchange (ETDEWEB) None 1999-06-01 Recent years have brought dramatic advances in the scientific understanding of fusion plasmas and in the generation of fusion power in the laboratory. Today, there is little doubt that fusion energy production is feasible. The challenge is to make fusion energy practical. As a result of the advances of the last few years, there are now exciting opportunities to optimize fusion systems so that an attractive new energy source will be available when it may be needed in the middle of the next century. The risk of conflicts arising from energy shortages and supply cutoffs, as well as the risk of severe environmental impacts from existing methods of energy production, are among the reasons to pursue these opportunities. 12. Eddy Resolving Global Ocean Prediction including Tides Science.gov (United States) 2013-09-30 oceanographic and acoustic soliton simulations in the Yellow Sea: a search for soliton-induced resonances.
Mathematics and Computers in Simulation...Assimilation), and in forecast mode. Also to incorporate advances in dynamics and physics from the science community into the HYCOM established and maintained...validate the model in different regions and different regimes. Demonstrated advancements in HYCOM numerics and physics from all sources will be 13. Theoretical analysis of the time-resolved binary (e, 2e) binding energy spectra on three-body photodissociation of acetone at 195 nm Science.gov (United States) Yamazaki, M.; Nakayama, S.; Zhu, C. Y.; Takahashi, M. 2017-11-01 We report on theoretical progress in time-resolved (e, 2e) electron momentum spectroscopy of photodissociation dynamics of the deuterated acetone molecule at 195 nm. We have examined the predicted minimum energy reaction path to investigate whether associated (e, 2e) calculations meet the experimental results. A noticeable difference between the experiment and calculations has been found at around binding energy of 10 eV, suggesting that the observed difference may originate, at least partly, in ever-unconsidered non-minimum energy paths. 14. Accurate prediction of adsorption energies on graphene, using a dispersion-corrected semiempirical method including solvation. Science.gov (United States) Vincent, Mark A; Hillier, Ian H 2014-08-25 The accurate prediction of the adsorption energies of unsaturated molecules on graphene in the presence of water is essential for the design of molecules that can modify its properties and that can aid its processability. We here show that a semiempirical MO method corrected for dispersive interactions (PM6-DH2) can predict the adsorption energies of unsaturated hydrocarbons and the effect of substitution on these values to an accuracy comparable to DFT values and in good agreement with the experiment. The adsorption energies of TCNE, TCNQ, and a number of sulfonated pyrenes are also predicted, along with the effect of hydration using the COSMO model. 15. 
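The dispersion correction behind semiempirical schemes such as the PM6-DH2 method named in the graphene adsorption entry above is, in essence, a damped pairwise -C6/r^6 sum added to the parent electronic energy. The sketch below implements a generic damped-dispersion sum with invented parameters; it is not the actual PM6-DH2 parameterization.

```python
import math

def damped_dispersion_energy(coords, c6, s6=1.0, r0=3.0, alpha=20.0):
    """Generic DFT-D/DH2-style pairwise dispersion correction:
        E_disp = -s6 * sum_{i<j} f_damp(r_ij) * C6_ij / r_ij^6
    with a Fermi-type damping function that switches the correction off
    at short range, where the parent method already describes overlap.
    C6_ij is taken as the geometric mean of the atomic C6 coefficients.
    All parameter values here are illustrative, not fitted.
    """
    e = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = math.dist(coords[i], coords[j])
            c6_ij = math.sqrt(c6[i] * c6[j])
            f_damp = 1.0 / (1.0 + math.exp(-alpha * (r / r0 - 1.0)))
            e -= s6 * f_damp * c6_ij / r**6
    return e

# Toy example: two "atoms" 3.5 units apart with invented C6 coefficients.
atoms = [(0.0, 0.0, 0.0), (0.0, 0.0, 3.5)]
print(damped_dispersion_energy(atoms, c6=[15.0, 15.0]))  # negative (attractive)
```

The damping function is the design choice that distinguishes the various -D corrections; without it the bare -C6/r^6 term diverges at short range and double-counts correlation the parent method already captures.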
Predictive Energy Management Strategy Including Traffic Flow Data for Hybrid Electric Vehicles NARCIS (Netherlands) Bouwman, K.R.; Pham, T.H.; Wilkins, S.; Hofman, T. 2017-01-01 Within hybrid electric vehicles (HEVs) predictive energy management strategies (EMSs) have the potential to reduce the fuel consumption compared to conventional EMSs, where the drive cycle is unknown. Typically, predictive EMSs require a future vehicle speed profile prediction. However, when 16. ADEME energy transition scenarios. Summary including a macro-economic evaluation 2030 2050 International Nuclear Information System (INIS) 2014-05-01 ADEME, the French Environment and Energy Management Agency, is a public agency reporting to the Ministry of Ecology, Sustainable Development and Energy and the Ministry of Higher Education and Research. In 2012 the agency drew up a long-term scenario entitled 'ADEME Energy Transition Scenarios 2030-2050'. This document presents a summary of the report. The full version can be viewed online on the ADEME web site. With this work ADEME offers a proactive energy vision for all stakeholders - experts, the general public, decision-makers, etc. - focusing on two main areas of expertise: managing energy conservation and developing renewable energy production using proven or demonstration-phase technologies. These scenarios identify a possible pathway for the energy transition in France. They are based on two time horizons and two separate methodologies. One projection, applicable from the present day, seeks to maximise potential energy savings and renewable energy production in an ambitious but realistic manner, up to 2030. The second exercise is a normative scenario that targets a fourfold reduction in greenhouse gas emissions generated in France by 2050, compared to 1990 levels. 
The analysis presented in this document is primarily based on an exploration of different scenarios that allow for the achievement of ambitious energy and environmental targets under technically, economically and socially feasible conditions. This analysis is supplemented by a macro-economic analysis. These projections, particularly for 2030, do not rely on radical changes in lifestyle, lower comfort levels or hypothetical major technological breakthroughs. They show that by using technologies and organisational changes that are currently within our reach, we have the means to achieve these long-term goals. The scenarios are based on assumptions of significant growth, both economic (1.8% per year) and demographic (0.4% a year). The 2050 scenario shows that with sustained growth, a 17. Modification of energy-transfer processes in the cyanobacterium, Arthrospira platensis, to adapt to light conditions, probed by time-resolved fluorescence spectroscopy. Science.gov (United States) Akimoto, Seiji; Yokono, Makio; Aikawa, Shimpei; Kondo, Akihiko 2013-11-01 In cyanobacteria, the interactions among pigment-protein complexes are modified in response to changes in light conditions. In the present study, we analyzed excitation energy transfer from the phycobilisome and photosystem II to photosystem I in the cyanobacterium Arthrospira (Spirulina) platensis. The cells were grown under lights with different spectral profiles and under different light intensities, and the energy-transfer characteristics were evaluated using steady-state absorption, steady-state fluorescence, and picosecond time-resolved fluorescence spectroscopy techniques. The fluorescence rise and decay curves were analyzed by global analysis to obtain fluorescence decay-associated spectra. The direct energy transfer from the phycobilisome to photosystem I and energy transfer from photosystem II to photosystem I were modified depending on the light quality, light quantity, and cultivation period. 
However, the total amount of energy transferred to photosystem I remained constant under the different growth conditions. We discuss the differences in energy-transfer processes under different cultivation and light conditions. 18. Measurement of the dynamic charge response of materials using low-energy, momentum-resolved electron energy-loss spectroscopy (M-EELS) Directory of Open Access Journals (Sweden) Sean Vig, Anshul Kogar, Matteo Mitrano, Ali A. Husain, Vivek Mishra, Melinda S. Rak, Luc Venema, Peter D. Johnson, Genda D. Gu, Eduardo Fradkin, Michael R. Norman, Peter Abbamonte 2017-10-01 Full Text Available One of the most fundamental properties of an interacting electron system is its frequency- and wave-vector-dependent density response function, $\chi({\bf q},\omega)$. The imaginary part, $\chi''({\bf q},\omega)$, defines the fundamental bosonic charge excitations of the system, exhibiting peaks wherever collective modes are present. $\chi$ quantifies the electronic compressibility of a material, its response to external fields, its ability to screen charge, and its tendency to form charge density waves. Unfortunately, there has never been a fully momentum-resolved means to measure $\chi({\bf q},\omega)$ at the meV energy scale relevant to modern electronic materials. Here, we demonstrate a way to measure $\chi$ with quantitative momentum resolution by applying alignment techniques from x-ray and neutron scattering to surface high-resolution electron energy-loss spectroscopy (HR-EELS). This approach, which we refer to here as "M-EELS", allows direct measurement of $\chi''({\bf q},\omega)$ with meV resolution while controlling the momentum with an accuracy better than a percent of a typical Brillouin zone.
We apply this technique to finite-${\bf q}$ excitations in the optimally doped high-temperature superconductor Bi$_2$Sr$_2$CaCu$_2$O$_{8+x}$ (Bi2212), which exhibits several phonons potentially relevant to dispersion anomalies observed in ARPES and STM experiments. Our study defines a path to studying the long-sought collective charge modes in quantum materials at the meV scale and with full momentum control. 19. Theory of energy harvesting from heartbeat including the effects of pleural cavity and respiration Science.gov (United States) Zhang, Yangyang; Lu, Bingwei; Lü, Chaofeng; Feng, Xue 2017-11-01 Self-powered implantable devices with flexible energy harvesters are of significant interest due to their potential to solve the problem of limited battery life and surgical replacement. Flexible electronic devices made of piezoelectric materials have been employed to harvest energy from the motion of biological organs. Experimental measurements show that the output voltage of a device mounted on a porcine left ventricle in a closed-chest environment decreases significantly compared to the open-chest case. A restricted-space deformation model is proposed to predict the impeding effects of the pleural cavity, surrounding tissues, and respiration on the efficiency of energy harvesting from the heartbeat using flexible piezoelectric devices. The analytical solution is verified by comparing theoretical predictions to experimental measurements. A simple scaling law is established to analyse the intrinsic correlations between the normalized output power and the combined system parameters, i.e. the normalized permitted space and normalized electrical load. The results may provide guidelines for optimization of in vivo energy harvesting from the heartbeat or the motions of other biological organs using flexible piezoelectric energy harvesters. 20.
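Several of the piezoelectric-harvester entries above rest on the same lumped electromechanical model: a base-excited spring-mass-damper coupled through a piezoelectric coefficient to a capacitive electrode loaded by a resistor. A minimal time-stepping sketch of that standard two-equation model follows; all parameter values are invented for illustration and do not come from the cited papers.

```python
import math

def harvester_avg_power(m=0.01, c=0.05, k=100.0, theta=1e-3,
                        cp=1e-7, r_load=1e5, a0=1.0,
                        dt=1e-5, t_end=2.0):
    """Base-excited lumped piezoelectric harvester:
        m x'' + c x' + k x + theta v = -m a(t),   a(t) = a0 sin(w t)
        cp v' + v / r_load = theta x'
    integrated with semi-implicit Euler; returns the mean electrical
    power v^2/R over the second half of the run, after transients decay.
    Parameter values are illustrative placeholders.
    """
    w = math.sqrt(k / m)  # drive at the short-circuit resonance
    x = xd = v = 0.0
    t, p_sum, n = 0.0, 0.0, 0
    while t < t_end:
        a = a0 * math.sin(w * t)
        xdd = (-c * xd - k * x - theta * v - m * a) / m
        xd += xdd * dt
        x += xd * dt
        v += (theta * xd - v / r_load) * dt / cp
        t += dt
        if t > t_end / 2:
            p_sum += v * v / r_load
            n += 1
    return p_sum / n

power = harvester_avg_power()
print(f"mean harvested power: {power * 1e6:.1f} microwatts")
```

This linear model is the baseline the non-linear optimization paper above improves upon: material and geometric non-linearities make the effective stiffness amplitude-dependent, which a fixed k cannot capture.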
Angle-resolved photoemission spectroscopy with 9-eV photon-energy pulses generated in a gas-filled hollow-core photonic crystal fiber Energy Technology Data Exchange (ETDEWEB) Bromberger, H., E-mail: Hubertus.Bromberger@mpsd.mpg.de; Liu, H.; Chávez-Cervantes, M.; Gierz, I. [Max Planck Institute for the Structure and Dynamics of Matter, Luruper Chaussee 149, 22761 Hamburg (Germany); Ermolov, A.; Belli, F.; Abdolvand, A.; Russell, P. St. J.; Travers, J. C. [Max Planck Institute for the Science of Light, Günther-Scharowsky-Str. 1, 91058 Erlangen (Germany); Calegari, F. [Max Planck Institute for the Structure and Dynamics of Matter, Luruper Chaussee 149, 22761 Hamburg (Germany); Institute for Photonics and Nanotechnologies, IFN-CNR, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Li, M. T.; Lin, C. T. [Max Planck Institute for Solid State Research, Heisenbergstr. 1, 70569 Stuttgart (Germany); Cavalleri, A. [Max Planck Institute for the Structure and Dynamics of Matter, Luruper Chaussee 149, 22761 Hamburg (Germany); Clarendon Laboratory, Department of Physics, University of Oxford, Parks Rd. Oxford OX1 3PU (United Kingdom) 2015-08-31 A recently developed source of ultraviolet radiation, based on optical soliton propagation in a gas-filled hollow-core photonic crystal fiber, is applied here to angle-resolved photoemission spectroscopy (ARPES). Near-infrared femtosecond pulses of only a few μJ energy generate vacuum ultraviolet radiation between 5.5 and 9 eV inside the gas-filled fiber. These pulses are used to measure the band structure of the topological insulator Bi₂Se₃ with a signal-to-noise ratio comparable to that obtained with high-order harmonics from a gas jet. The two-order-of-magnitude gain in efficiency promises time-resolved ARPES measurements at repetition rates of hundreds of kHz or even MHz, with photon energies that cover the first Brillouin zone of most materials. 1.
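The closing claim of the ARPES entry above, that photon energies up to 9 eV cover the first Brillouin zone of most materials, follows from the free-electron photoemission kinematics k_par = (sqrt(2 m E_kin)/hbar) sin(theta). A quick numerical check, assuming a typical work function of about 4.5 eV (an assumed value, not one stated in the abstract):

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # electron volt, J

def k_parallel_max(photon_ev, work_function_ev):
    """Largest in-plane momentum reachable at grazing emission
    (theta = 90 deg): k_par = sqrt(2 m E_kin) / hbar.
    Returns the result in inverse angstroms."""
    e_kin = (photon_ev - work_function_ev) * EV
    return math.sqrt(2 * M_E * e_kin) / HBAR * 1e-10

# 9 eV photons with an assumed 4.5 eV work function -> E_kin = 4.5 eV.
k_max = k_parallel_max(9.0, 4.5)
# A typical Brillouin zone boundary for a ~3.8 angstrom lattice constant:
bz_boundary = math.pi / 3.8
print(f"k_max = {k_max:.2f} 1/A vs BZ boundary ~ {bz_boundary:.2f} 1/A")
```

With these assumptions k_max comes out just above 1 inverse angstrom, which indeed exceeds the zone boundary of materials with lattice constants of a few angstroms, consistent with the abstract's statement.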
Energy-based fatigue model for shape memory alloys including thermomechanical coupling Science.gov (United States) Zhang, Yahui; Zhu, Jihong; Moumni, Ziad; Van Herpen, Alain; Zhang, Weihong 2016-03-01 This paper is aimed at developing a low cycle fatigue criterion for pseudoelastic shape memory alloys to take into account thermomechanical coupling. To this end, fatigue tests are carried out at different loading rates under strain control at room temperature using NiTi wires. Temperature distribution on the specimen is measured using a high speed thermal camera. Specimens are tested to failure and fatigue lifetimes of specimens are measured. Test results show that the fatigue lifetime is greatly influenced by the loading rate: as the strain rate increases, the fatigue lifetime decreases. Furthermore, it is shown that the fatigue cracks initiate when the stored energy inside the material reaches a critical value. An energy-based fatigue criterion is thus proposed as a function of the irreversible hysteresis energy of the stabilized cycle and the loading rate. Fatigue life is calculated using the proposed model. The experimental and computational results compare well. 2. Decision-maker's guide to wood fuel for small industrial energy users. Final report. [Includes glossary Energy Technology Data Exchange (ETDEWEB) Levi, M. P.; O'Grady, M. J. 1980-02-01 The technology and economics of various wood energy systems available to the small industrial and commercial energy user are considered. This book is designed to help a plant manager, engineer, or others in a decision-making role to become more familiar with wood fuel systems and make informed decisions about switching to wood as a fuel. The following subjects are discussed: wood combustion, pelletized wood, fuel storage, fuel handling and preparation, combustion equipment, retrofitting fossil-fueled boilers, cogeneration, pollution abatement, and economic considerations of wood fuel use. (MHR) 3.
Excitation relaxation dynamics and energy transfer in fucoxanthin-chlorophyll a/c-protein complexes, probed by time-resolved fluorescence. Science.gov (United States) Akimoto, Seiji; Teshigahara, Ayaka; Yokono, Makio; Mimuro, Mamoru; Nagao, Ryo; Tomo, Tatsuya 2014-09-01 4. Improved morphed potentials for Ar-HBr including scaling to the experimentally determined dissociation energy. Science.gov (United States) Wang, Z; McIntosh, A L; McElmurry, B A; Walton, J R; Lucchese, R R; Bevan, J W 2005-09-15 A lead salt diode infrared laser spectrometer has been employed to investigate the rotational predissociation in Ar-HBr for transitions up to J' = 79 in the v(1) HBr stretching vibration of the complex using a slit jet and static gas phase. Line-shape analysis and modeling of the predissociation lifetimes have been used to determine a ground-state dissociation energy D(0) of 130(1) cm(-1). In addition, potential energy surfaces based on ab initio calculations are scaled, shifted, and dilated to generate three-dimensional morphed potentials for Ar-HBr that reproduce the measured value of D(0) and that have predictive capabilities for spectroscopic data with nearly experimental uncertainty. Such calculations also provide a basis for making a comprehensive comparison of the different morphed potentials generated using the methodologies applied. 5. 77 FR 30274 - Inupiat Energy Marketing, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes... Science.gov (United States) 2012-05-22 ... Energy Marketing, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for... Inupiat Energy Marketing, LLC's application for market-based rate authority, with an accompanying rate... protests and interventions in lieu of paper, using the FERC Online links at http://www.ferc.gov . To... 6. 
Including crystal structure attributes in machine learning models of formation energies via Voronoi tessellations Science.gov (United States) Ward, Logan; Liu, Ruoqian; Krishna, Amar; Hegde, Vinay I.; Agrawal, Ankit; Choudhary, Alok; Wolverton, Chris 2017-07-01 While high-throughput density functional theory (DFT) has become a prevalent tool for materials discovery, it is limited by the relatively large computational cost. In this paper, we explore using DFT data from high-throughput calculations to create faster, surrogate models with machine learning (ML) that can be used to guide new searches. Our method works by using decision tree models to map DFT-calculated formation enthalpies to a set of attributes consisting of two distinct types: (i) composition-dependent attributes of elemental properties (as have been used in previous ML models of DFT formation energies), combined with (ii) attributes derived from the Voronoi tessellation of the compound's crystal structure. The ML models created using this method have half the cross-validation error and similar training and evaluation speeds to models created with the Coulomb matrix and partial radial distribution function methods. For a dataset of 435 000 formation energies taken from the Open Quantum Materials Database (OQMD), our model achieves a mean absolute error of 80 meV/atom in cross validation, which is lower than the approximate error between DFT-computed and experimentally measured formation enthalpies and below 15% of the mean absolute deviation of the training set. We also demonstrate that our method can accurately estimate the formation energy of materials outside of the training set and be used to identify materials with especially large formation enthalpies. We propose that our models can be used to accelerate the discovery of new materials by identifying the most promising materials to study with DFT at little additional computational cost. 7. 
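The attribute-plus-model pipeline described in the abstract above can be sketched in miniature. Everything below is illustrative, not the authors' code: the two tabulated element properties, the distance-cutoff neighbor count (a crude stand-in for the Voronoi face counts the paper derives from the tessellation), and the 1-nearest-neighbour regressor (a stand-in for the paper's decision-tree ensembles) are all assumptions.

```python
import math

# Hypothetical elemental properties (electronegativity, atomic radius in Å);
# real models use dozens of tabulated properties per element.
PROPS = {"Na": (0.93, 1.66), "Cl": (3.16, 1.02), "Mg": (1.31, 1.41), "O": (3.44, 0.66)}

def coordination_numbers(sites, cutoff=3.5):
    """Count neighbors within a distance cutoff for each site -- a crude
    stand-in for the number of Voronoi faces used as a structural attribute."""
    counts = []
    for i, (_, pi) in enumerate(sites):
        n = sum(1 for j, (_, pj) in enumerate(sites)
                if i != j and math.dist(pi, pj) <= cutoff)
        counts.append(n)
    return counts

def featurize(sites):
    """Map a structure [(element, (x, y, z)), ...] to
    [mean electronegativity, mean radius, mean coordination number]."""
    en = [PROPS[e][0] for e, _ in sites]
    r = [PROPS[e][1] for e, _ in sites]
    cn = coordination_numbers(sites)
    return [sum(en) / len(en), sum(r) / len(r), sum(cn) / len(cn)]

def predict(x, training):
    """1-nearest-neighbour regression over (features, formation energy) pairs --
    a minimal stand-in for the random-forest models used in the paper."""
    return min(training, key=lambda t: math.dist(t[0], x))[1]
```

The design point the paper makes survives even in this toy: combining composition-averaged elemental properties with structure-derived attributes gives the model a feature space in which chemically and structurally similar compounds land near each other.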
Proof of the positive energy theorem including the angular momentum contribution International Nuclear Information System (INIS) Zhang Jingfei; Chee, G.Y.; Guo Yongxin 2005-01-01 A proof of the positive energy theorem of general relativity is given. In this proof the gravitational Lagrangian is identified with that of Lau and is equivalent to the teleparallel Lagrangian modulo a boundary term. The approach adopted in this proof uses the two-spinor method and the extended Witten identities and then combines the Brown-York and the Nester-Witten approaches. At the same time the proof is extended to the case involving the contribution of angular momentum by choosing a special shift vector. 8. Automatic generation control with thyristor controlled series compensator including superconducting magnetic energy storage units Directory of Open Access Journals (Sweden) 2014-09-01 Full Text Available In the present work, an attempt has been made to understand the dynamic performance of Automatic Generation Control (AGC) of a multi-area multi-unit thermal–thermal power system with the consideration of reheat turbines, Generation Rate Constraint (GRC) and time delay. Initially, the gains of the fuzzy PID controller are optimized using the Differential Evolution (DE) algorithm. The superiority of DE is demonstrated by comparing the results with a Genetic Algorithm (GA). After that, the performance of the Thyristor Controlled Series Compensator (TCSC) has been investigated. Further, a TCSC is placed in the tie-line and Superconducting Magnetic Energy Storage (SMES) units are considered in both areas. Finally, sensitivity analysis is performed by varying the system parameters and operating load conditions from their nominal values. It is observed that the optimum gains of the proposed controller need not be reset even if the system is subjected to wide variation in loading condition and system parameters. 9.
Frequency participation by using virtual inertia in wind turbines including energy storage DEFF Research Database (Denmark) Xiao, Zhao xia; Huang, Yu; Guerrero, Josep M. 2017-01-01 (WT) and battery unit (BU). A central controller forecasts wind speed and determines system operation states to be sent to the local controllers. These local controllers include MPPT, virtual inertia, and pitch control for the WT; and power control loops for the BU. The proposed approach achieve... 10. Optimization of energy plants including water/lithium bromide absorption chillers Energy Technology Data Exchange (ETDEWEB) Bruno, J.C.; Castells, F. [Universitat Rovira i Virgili, Dept. d' Enginyeria Quimica, Tarragona (Spain); Miquel, J. [Universitat Politecnica de Catalunya, Dept. de Mecanica de Fluids, Barcelona (Spain) 2000-07-01 In this paper a methodology for the optimal integration of water/lithium bromide absorption chillers in combined heat and power plants is proposed. This method is based on the economic optimisation of an energy plant that interacts with a refrigeration cycle, by using a successive linear programming technique (SLP). The aim of this paper is to study the viability of the integration of already technologically available absorption chillers in CHP plants. The results of this alternative are compared with the results obtained using the conventional way of producing chilled water, that is, using mechanical vapour compression chillers in order to select the best refrigeration cycle alternative for a given refrigeration demand. This approach is implemented in the computer program XV, and tested using the data obtained in the water/LiBr absorption chiller of Bayer in Tarragona (Catalonia, Spain). The results clearly show that absorption chillers are not only a good option when low-cost process heat is available, but also when a cogeneration system is presented. 
In this latter case, the absorption chiller acts as a bottoming cycle by using steam generated in the heat recovery boiler. In this way, the cogeneration size can be increased, producing higher benefits than those obtained with the use of compression chillers. (Author) 11. Reconstruction of 6 MV photon spectra from measured transmission including maximum energy estimation. Science.gov (United States) Baker, C R; Peck, K K 1997-11-01 Photon spectra from a nominally 6 MV beam under standard clinical conditions and at higher and lower beam qualities have been derived from narrow-beam transmission measurements using a previously published three-parameter reconstruction model. Estimates of the maximum photon energy present in each spectrum were derived using a reduced number of model parameters. An estimate of the maximum contribution of background, or room, scatter to transmission measurements has been made for this study and is shown to be negligible in terms of the quality index and percentage depth-dose of the derived spectra. Percentage depth-dose data for standard beam conditions derived from the reconstructed spectrum were found to agree with direct measurements to within approximately 1% for depths of up to 25 cm in water. Quality indices expressed in terms of TPR20,10 for all spectra were found to agree with directly measured values to within 1%. The experimental procedure and reconstruction model are therefore shown to produce photon spectra whose derived quality indices and percentage depth-dose values agree with direct measurement to within expected experimental uncertainty. 12. Computation of binding energies including their enthalpy and entropy components for protein-ligand complexes using support vector machines.
Science.gov (United States) Koppisetty, Chaitanya A K; Frank, Martin; Kemp, Graham J L; Nyholm, Per-Georg 2013-10-28 Computing binding energies of protein-ligand complexes, including their enthalpy and entropy terms, by means of computational methods is an appealing approach for selecting initial hits and for further optimization in early stages of drug discovery. Despite their importance, computational predictions of these thermodynamic components have received little attention and still lack reliable solutions. In this study, support vector machines are used for developing scoring functions to compute binding energies and their enthalpy and entropy components of protein-ligand complexes. The binding energies computed from our newly derived scoring functions have better Pearson's correlation coefficients with experimental data than previously reported scoring functions in benchmarks for protein-ligand complexes from the PDBBind database. The protein-ligand complexes with binding energies dominated by the enthalpy or entropy term could be qualitatively classified by the newly derived scoring functions with high accuracy. Furthermore, it is found that the inclusion of comprehensive descriptors based on ligand properties in the scoring functions improved the accuracy of classification as well as the prediction of binding energies including their thermodynamic components. The prediction of binding energies including the enthalpy and entropy components using the support vector machine based scoring functions should be of value in the drug discovery process. 13. Time Resolved Spectroscopic Studies on a Novel Synthesized Photo-Switchable Organic Dyad and Its Nanocomposite Form in Order to Develop Light Energy Conversion Devices.
Science.gov (United States) Dutta Pal, Gopa; Paul, Abhijit; Yadav, Somnath; Bardhan, Munmun; De, Asish; Chowdhury, Joydeep; Jana, Aindrila; Ganguly, Tapan 2015-08-01 UV-vis absorption, steady state and time resolved spectroscopic investigations in the pico- and nanosecond time domains were made in different environments on a novel synthesized dyad, 3-(2-methoxynaphthalen-1-yl)-1-(4-methoxyphenyl)prop-2-en-1-one (MNTMA), in its pristine form and when combined with gold (Au) nanoparticles, i.e., in its nanocomposite structure. Both steady state and time resolved measurements, coupled with DFT calculations performed using the Gaussian 03 suite of software on a Linux operating system, show that although the dyad exhibits mainly the folded conformation in the ground state, on photoexcitation the nanocomposite form of the dyad prefers an elongated structure in the excited state, indicating its photoswitchable nature. Due to the predominance of the elongated isomeric form of the dyad in the excited state in the presence of Au NPs, it appears that the dyad MNTMA may behave as a good light energy converter, especially in its nanocomposite form. As a larger charge separation rate (k_CS ~ 4 x 10^8 s^-1) is found relative to the rate associated with the energy-wasting charge recombination processes (k_CR ~ 3 x 10^5 s^-1) in the nanocomposite form of the dyad, it demonstrates the suitability of constructing efficient light energy conversion devices with Au-dyad hybrid nanomaterials. 14. Digital fast pulse shape and height analysis on cadmium-zinc-telluride arrays for high-flux energy-resolved X-ray imaging. Science.gov (United States) Abbene, Leonardo; Principato, Fabio; Gerardi, Gaetano; Bettelli, Manuele; Seller, Paul; Veale, Matthew C; Zambelli, Nicola; Benassi, Giacomo; Zappettini, Andrea 2018-01-01 Cadmium-zinc-telluride (CZT) arrays with photon-counting and energy-resolving capabilities are widely proposed for next-generation X-ray imaging systems.
This work presents the performance of a 2 mm-thick CZT pixel detector, with pixel pitches of 500 and 250 µm, dc coupled to a fast and low-noise ASIC (PIXIE ASIC), characterized only by the preamplifier stage. A custom 16-channel digital readout electronics was used, able to digitize and process continuously the signals from each output ASIC channel. The digital system performs on-line fast pulse shape and height analysis, with a low dead-time and reasonable energy resolution at both low and high fluxes. The spectroscopic response of the system to photon energies below (109Cd source) and above (241Am source) the K-shell absorption energy of the CZT material was investigated, with particular attention to the mitigation of charge sharing and pile-up. The detector allows high bias voltage operation (>5000 V cm^-1) and good energy resolution at moderate cooling (3.5% and 5% FWHM at 59.5 keV for the 500 and 250 µm arrays, respectively) by using fast pulse shaping with a low dead-time (300 ns). Charge-sharing investigations were performed using a fine time coincidence analysis (TCA), with very short coincidence time windows up to 10 ns. For the 500 µm pitch array (250 µm pitch array), sharing percentages of 36% (52%) and 60% (82%) at 22.1 and 59.5 keV, respectively, were measured. The potential of the pulse shape analysis technique for charge-sharing detection for corner/border pixels and at high rate conditions (250 kcps pixel^-1), where the TCA fails, is also shown. Measurements demonstrated that significant amounts of charge are lost for interactions occurring in the volume of the inter-pixel gap. This charge loss must be accounted for in the correction of shared events. These activities are within the framework of an international collaboration on the development of energy-resolved 15.
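The time-coincidence analysis (TCA) used in the abstract above to flag charge-sharing events can be sketched as follows. The event-tuple format and the naive summed-energy correction are assumptions for illustration, not the actual PIXIE readout code; as the abstract notes, a real correction must also recover charge lost in the inter-pixel gap.

```python
# Minimal time-coincidence sketch: events that arrive within a short window
# on different pixels are treated as fragments of one shared photon.

def find_shared_events(events, window_ns=10.0):
    """Group events whose timestamps fall within a coincidence window.

    events: list of (time_ns, pixel_id, energy_keV), assumed time-sorted.
    Groups with more than one pixel are candidate charge-sharing events.
    """
    groups, current = [], []
    for ev in events:
        # Start a new group once the window anchored at the group's
        # first event has elapsed.
        if current and ev[0] - current[0][0] > window_ns:
            groups.append(current)
            current = []
        current.append(ev)
    if current:
        groups.append(current)
    return groups

def reconstruct_energies(events, window_ns=10.0):
    """Sum the energies of coincident events (naive sharing correction)."""
    return [sum(e[2] for e in g) for g in find_shared_events(events, window_ns)]
```

For example, a 59.5 keV photon split 40.0/19.5 keV across two neighboring pixels 3 ns apart is recombined into a single 59.5 keV count, while an isolated event far away in time stays its own group. This window-anchored grouping is also why TCA breaks down at very high rates: unrelated photons start landing inside the same window.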
Decay time shortening of fluorescence from donor-acceptor pair proteins using ultrafast time-resolved fluorescence resonance energy transfer spectroscopy International Nuclear Information System (INIS) Baba, Motoyoshi; Suzuki, Masayuki; Ganeev, Rashid A.; Kuroda, Hiroto; Ozaki, Tsuneyuki; Hamakubo, Takao; Masuda, Kazuyuki; Hayashi, Masahiro; Sakihama, Toshiko; Kodama, Tatsuhiko; Kozasa, Tohru 2007-01-01 We improved an ultrafast time-resolved fluorescence resonance energy transfer (FRET) spectroscopy system and directly measured the decrease in the fluorescence decay time of the FRET signal, without any entanglement of components on the picosecond time scale, from donor-acceptor protein pairs (such as the cameleon protein, a calcium ion indicator, and the ligand-activated GRIN-Go protein pair). The drastic decrease in lifetime of the donor protein fluorescence under the FRET condition (e.g. a 47.8% decrease for a GRIN-Go protein pair) proves the deformation dynamics between donor and acceptor fluorescent proteins in an activated state of a mixed donor-acceptor protein pair. This study is the first clear evidence of physical contact of the GRIN-Go protein pair using the time-resolved FRET system. G protein-coupled receptors (GPCRs) are the most important protein family for the recognition of many chemical substances at the cell surface. They are the targets of many drugs. Simultaneously, we were able to observe the time-resolved spectra of luminous proteins at the initial stage under the FRET condition, within 10 ns from excitation. This new FRET system allows us to trace the dynamics of the interaction between proteins in the ligand-induced activated state, molecular structure change, and combination or dissociation. It will be a key technology for the development of protein chip technology. 16.
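The lifetime decrease quoted in the abstract above maps directly onto the standard lifetime-based FRET relations (textbook formulas, not stated in the abstract itself):

```latex
E \;=\; 1 - \frac{\tau_{DA}}{\tau_{D}} \;=\; \frac{R_0^{\,6}}{R_0^{\,6} + r^{6}}
```

where $\tau_{DA}$ and $\tau_{D}$ are the donor lifetimes with and without the acceptor, $r$ is the donor-acceptor separation, and $R_0$ is the Förster radius. Under these standard relations, the reported 47.8% lifetime decrease corresponds to a transfer efficiency $E \approx 0.478$, i.e. a separation $r$ very close to $R_0$ (since $E = 0.5$ exactly at $r = R_0$), consistent with the paper's conclusion of close physical contact between the proteins.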
Time- and energy resolved photoemission electron microscopy-imaging of photoelectron time-of-flight analysis by means of pulsed excitations International Nuclear Information System (INIS) Oelsner, Andreas; Rohmer, Martin; Schneider, Christian; Bayer, Daniela; Schoenhense, Gerd; Aeschlimann, Martin 2010-01-01 The present work highlights the developments in time- and energy resolved photoemission electron microscopy over the past few years. We describe basic principles of the technique and demonstrate different applications. Energy- and time-filtered photoemission electron microscopy (PEEM) for real-time spectroscopic imaging can be realized either with a retarding-field or hemispherical energy analyzer or by using time-of-flight optics with a delay line detector. The latter method has the advantage of no data loss at all, as all randomly incoming particles are measured not only by position but also by time. This is of particular interest for pump-probe experiments on the femtosecond and attosecond time scales, where space charge processes drastically limit the maximum number of photoemitted electrons per laser pulse. This work focuses particularly on time-of-flight analysis using a novel delay line detector. Time and energy resolved PEEM instruments with delay line detectors enable 4D imaging (x, y, Δt, E_kin) on a true counting basis. This allows a broad range of applications, from real-time observation of dynamic phenomena at surfaces to fs time-of-flight spectro-microscopy and even aberration correction. By now, these time-of-flight analysis instruments achieve intrinsic time resolutions of 108 ps absolute and 13.5 ps relative. Very high permanent measurement speeds of more than 4 million events per second in random detection regimes have been realized using a standard USB2.0 interface. With this performance, the time-resolved PEEM technique enables live display of spatially resolved (<25 nm), temporally sliced image evolutions on any modern computer.
The method allows investigations of the dynamics of variable electrical, magnetic, and optical near fields at surfaces and offers great prospects for dynamically adaptive photoelectron optics. For dynamical processes in the ps time scale such as magnetic domain wall movements, the time resolution of the delay line detectors 17. Composting of soils/sediments and sludges containing toxic organics including high energy explosives. Final report Energy Technology Data Exchange (ETDEWEB) Doyle, R.C.; Kitchens, J.F. 1993-07-01 Laboratory and pilot-scale experimentation were conducted to evaluate composting as an on-site treatment technology to remediate soils contaminated with hazardous waste at DOE's PANTEX Plant. Suspected contaminated sites within the PANTEX Plant were sampled and analyzed for explosives, other organics, and inorganic wastes. Soils in drainage ditches and playas at the PANTEX Plant were found to be contaminated with low levels of explosives (including RDX, HMX, PETN and TATB). Additional sites previously used for solvent disposal were heavily contaminated with solvents and transformation products of the solvent, as well as explosives and by-products of explosives. Laboratory studies were conducted using {sup 14}C-labeled explosives and {sup 14}C-labeled diacetone alcohol contaminated soil loaded into horse manure/hay composts at three rates: 20, 30, and 40% (W/W). The composts were incubated for six weeks at approximately 60{degree}C with continuous aeration. All explosives degraded rapidly and were reduced to below detection limits within 3 weeks in the laboratory studies. {sup 14}C-degradates from {sup 14}C-RDX, {sup 14}C-HMX and {sup 14}C-TATB were largely limited to {sup 14}CO{sub 2} and unextracted residue in the compost. Volatile and non-volatile {sup 14}C-degradates were found to result from {sup 14}C-PETN breakdown, but these compounds were not identified. {sup 14}C-diacetone alcohol concentrations were significantly reduced during composting.
However, most of the radioactivity was volatilized from the compost as non-{sup 14}CO{sub 2} degradates or as {sup 14}C-diacetone alcohol. Pilot scale composts loaded with explosives-contaminated soil at 30% (W/W) with intermittent aeration were monitored over six weeks. Data from the pilot-scale study were generally in agreement with the laboratory studies. However, the {sup 14}C-labeled TATB degraded much faster than the unlabeled TATB. Some formulations of TATB may be more resistant to composting activity than others. 18. EMPLOY: Step-by-step guidelines for calculating employment effects of renewable energy investments [including annex 2 Energy Technology Data Exchange (ETDEWEB) Breitschopf, Barbara [Fraunhofer Inst. for Systems and Innovation Research (Germany); Nathani, Carsten [Ruetter and Partner Socioeconomic Research and Consulting (Switzerland); Resch, Gustav [Vienna Univ. of Technology, Energy Economics Group (EEG) (Austria) 2012-07-15 The EMPLOY project aimed to help achieve the IEA-RETD's objective to 'empower policy makers and energy market actors through the provision of information, tools and resources' by underlining the economic and industrial impacts of renewable energy technology deployment and providing reliable methodological approaches for employment – similar to those available for the incumbent energy technologies. The EMPLOY project resulted in a comprehensive set of methodological guidelines for estimating the employment impacts of renewable energy deployment in a coherent, uniform and systematic way. Guidelines were prepared for four different methodological approaches. In the introduction section of the guidelines, policy makers are guided in their choice of the most suited approach, depending on the policy questions to be answered, the data availability and budget. The guidelines were tested for the IEA-RETD member state countries and Tunisia. The results of these calculations are included in the annex to the guidelines. 19.
Feasibility study on temporal-resolved diffraction of high-energy electrons produced in femtosecond laser-plasmas CERN Document Server Zhang Jun; Cang Yu; Chen Qing; Peng Lian Mao; Wang Huai Bin; Zhong Jia Yong 2002-01-01 High-energy electrons can be produced in the interaction between intense ultra-short laser pulses and Al targets. Diffraction may take place when these high-energy electrons pass through an Al single crystal. Feasibility is studied of using such diffraction as a method to analyze the structures of crystals. 20. Comparison of the rate constants for energy transfer in the light-harvesting protein, C-phycocyanin, calculated from Foerster's theory and experimentally measured by time-resolved fluorescence spectroscopy Energy Technology Data Exchange (ETDEWEB) Debreczeny, Martin Paul [Univ. of California, Berkeley, CA (United States) 1994-05-01 We have measured and assigned rate constants for energy transfer between chromophores in the light-harvesting protein C-phycocyanin (PC), in the monomeric and trimeric aggregation states, isolated from Synechococcus sp. PCC 7002. In order to compare the measured rate constants with those predicted by Foerster's theory of inductive resonance in the weak coupling limit, we have experimentally resolved several properties of the three chromophore types ({beta}{sub 155}, {alpha}{sub 84}, {beta}{sub 84}) found in PC monomers, including absorption and fluorescence spectra, extinction coefficients, fluorescence quantum yields, and fluorescence lifetimes. The cpcB/C155S mutant, whose PC is missing the {beta}{sub 155} chromophore, was useful in effecting the resolution of the chromophore properties and in assigning the experimentally observed rate constants for energy transfer to specific pathways. 1. Brazilian national energy balance 2007. Calendar year 2006 [Includes executive summary 2007]; Balanco energetico nacional 2007.
Ano base 2006 Energy Technology Data Exchange (ETDEWEB) NONE 2007-07-01 This document reports the activities of the Ministry of Mine and Energy during the calendar year 2006 as follows: energy analysis and aggregated data; supply and demand of energy according to source; energy consumption according to sector; energy external trading; transformation center balance; energy resources and reserves; energy and social economics; state energy data; installed capacity; energy world data. 2. ChromAIX: A high-rate energy-resolving photon-counting ASIC for Spectral Computed Tomography NARCIS (Netherlands) Steadman, R.; Herrmann, C.; Mülhens, O. 2011-01-01 X-ray attenuation properties of matter (i.e. the human body in medical Computed Tomography) are energy and material dependent. This dependency is largely neglected in conventional CT techniques, which require the introduction of correction algorithms in order to prevent image artefacts. The 3. Resolving Past Liabilities for Future Reduction in Greenhouse Gases; Nuclear Energy and the Outstanding Federal Liability of Spent Nuclear Fuel Science.gov (United States) Donohue, Jay This thesis will: (1) examine the current state of nuclear power in the U.S.; (2) provide a comparison of nuclear power to both existing alternative/renewable sources of energy as well as fossil fuels; (3) dissect Standard Contracts created pursuant to the National Waste Policy Act (NWPA), Congress' attempt to find a solution for Spent Nuclear Fuel (SNF), and the designation of Yucca Mountain as a repository; (4) examine the anticipated failure of Yucca Mountain; (5) explore WIPP as well as attempts to build a facility on Native American land in Utah; (6) examine reprocessing as a solution for SNF used by France and Japan; and, finally, (7) propose a solution to reduce GHGs by developing new nuclear energy plants with financial support from the U.S.
government and a solution to build a storage facility for SNF through the siting of a repository based on a "bottom-up" cooperative federalism approach. 4. Time and space-resolved energy flux measurements in the divertor of the ASDEX tokamak by computerized infrared thermography International Nuclear Information System (INIS) Mueller, E.R.; Steinmetz, K.; Bein, B.K. 1984-06-01 A new, fully computerized and automatic thermographic system has been developed. Its two central components are an AGA THV 780 infrared camera and a PDP-11/34 computer. A combined analytical-numerical method of solving the 1-dimensional heat diffusion equation for a solid of finite thickness bounded by two parallel planes was developed. In high-density (average density n{sub e} = 8 x 10{sup 13} cm{sup -3}) neutral-beam-heated (L-mode) divertor discharges in ASDEX, the power deposition on the neutralizer plates is reduced to about 10-15% of the total heating power, owing to the inelastic scattering of the divertor plasma from a neutral gas target. Between 30% and 40% of the power is missing in the global balance. The power flow inside the divertor chambers is restricted to an approximately 1-cm-thick plasma scrape-off layer. This width depends only weakly on the density and heating power. During H-phases free of Edge Localized Mode (ELM) activity the energy flow into the divertor is blocked. During H-phases with ELM activity the energy is expelled into the divertor in very short intense pulses (several MW for about one hundred μs). Sawtooth events are able to transport significant amounts of energy from the plasma core to the peripheral zones and the scrape-off layer, and they are frequently correlated with transitions from the L to the H mode. (orig./AH) 5.
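The 1-D heat-diffusion problem underlying such infrared thermography can be illustrated with its forward model. The explicit finite-difference scheme below is a deliberately simpler stand-in for the combined analytical-numerical method of the paper; the grid, the flux boundary treatment, and the parameter names are assumptions for illustration only.

```python
def heat_step(T, alpha, dx, dt, q_surface=0.0, k=1.0):
    """One explicit finite-difference step of the 1-D heat equation
    dT/dt = alpha * d2T/dx2 in a slab of finite thickness, with a
    prescribed heat flux q_surface on the front face and an insulated
    back face. Stable only for r = alpha*dt/dx**2 <= 0.5.
    """
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable for this dt/dx"
    new = T[:]
    # interior nodes: standard three-point Laplacian
    for i in range(1, len(T) - 1):
        new[i] = T[i] + r * (T[i-1] - 2*T[i] + T[i+1])
    # front face: ghost-node treatment of the incoming flux q = -k dT/dx
    new[0] = T[0] + 2*r * (T[1] - T[0] + q_surface * dx / k)
    # back face: insulated (zero-flux) boundary
    new[-1] = T[-1] + 2*r * (T[-2] - T[-1])
    return new
```

Stepping this forward from a known surface-flux history gives the temperature the camera would see; the measurement problem solved in the paper is the inverse one, inferring the deposited energy flux from the observed surface temperature.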
Probing long-range structural order in SnPc/Ag(111) by umklapp process assisted low-energy angle-resolved photoelectron spectroscopy Science.gov (United States) Jauernik, Stephan; Hein, Petra; Gurgel, Max; Falke, Julian; Bauer, Michael 2018-03-01 Laser-based angle-resolved photoemission spectroscopy is performed on tin-phthalocyanine (SnPc) adsorbed on silver Ag(111). Upon adsorption of SnPc, strongly dispersing bands are observed which are identified as secondary Mahan cones formed by surface umklapp processes acting on photoelectrons from the silver substrate as they transit through the ordered adsorbate layer. We show that the photoemission data carry quantitative structural information on the adsorbate layer similar to what can be obtained from a conventional low-energy electron diffraction (LEED) study. More specifically, we compare photoemission data and LEED data probing an incommensurate-to-commensurate structural phase transition of the adsorbate layer. Based on our results we propose that Mahan-cone spectroscopy operated in a pump-probe configuration can be used in the future to probe structural dynamics at surfaces with a temporal resolution in the sub-100-fs regime. 6. Effect of crystallinity on UV degradability of poly[methyl(phenyl)silane] by energy-resolved electrochemical impedance spectroscopy Directory of Open Access Journals (Sweden) F. Schauer 2017-05-01 Full Text Available Low stability and degradability of polymers by ambient air, UV irradiation or charge transport are major problems of molecular electronics devices. Recent research tentatively suggests that the presence of a crystalline phase may increase polymer stability due to an intensive energy trapping in the ordered phase. Using the UV degradability, we demonstrate this effect on an archetypal model σ bonded polymer, poly[methyl(phenyl)silane] (PMPSi), with partially crystalline and amorphous-like layers.
The UV degradation rate at 345 nm, derived from the branching-state generation rate, was inversely proportional to the crystalline phase content, changing from 4.8×10^11 s^-1 (partially crystalline phase) to 1.8×10^13 s^-1 (amorphous-like phase). A model is proposed where crystallites formed by molecular packing act as effective excitation energy traps with suppressed nonradiative recombination, thus improving PMPSi film stability. Molecular packing and a higher crystalline phase proportion may be a general approach for improving the stability and degradability of polymers in molecular electronics. 7. Resolving the impasse in American energy policy. The case for a transformational R and D strategy at the U.S. Department of Energy Energy Technology Data Exchange (ETDEWEB) Sovacool, Benjamin K. [National University of Singapore, Lee Kuan Yew School of Public Policy Centre on Asia and Globalisation, 469C Bukit Timah Road, Singapore 259772 (Singapore) 2009-02-15 From its inception in 1977, the U.S. Department of Energy (DOE) has been responsible for maintaining the nation's nuclear stockpile, leading the country in terms of basic research, setting national energy goals, and managing thousands of individual programs. Despite these accomplishments, however, the DOE research and development (R and D) model does not appear to offer the nation an optimal strategy for assessing long-term energy challenges. American energy policy continues to face constraints related to three I's: inconsistency, incrementalism, and inadequacy. An overly rigid management structure and loss of mission within the DOE continues to plague its programs and create inconsistencies in terms of a national energy policy. Various layers of stove-piping within and between the DOE and national laboratories continue to fracture collaboration between institutions and engender only slow, incremental progress on energy problems.
Funding for energy research and development also remains inadequate, compromising the country's ability to address energy challenges. To address these concerns, an R and D organization dedicated to transformative, creative research is proposed. (author) 8. Energy transfer in Anabaena variabilis filaments adapted to nitrogen-depleted and nitrogen-enriched conditions studied by time-resolved fluorescence. Science.gov (United States) Onishi, Aya; Aikawa, Shimpei; Kondo, Akihiko; Akimoto, Seiji 2017-09-01 Nitrogen is among the most important nutrients for photosynthetic organisms such as plants, algae, and cyanobacteria. Therefore, nitrogen depletion severely compromises the growth, development, and photosynthesis of these organisms. To preserve their integrity under nitrogen-depleted conditions, filamentous nitrogen-fixing cyanobacteria reduce atmospheric nitrogen to ammonia, and self-adapt by regulating their light-harvesting and excitation energy-transfer processes. To investigate the changes in the primary processes of photosynthesis, we measured the steady-state absorption and fluorescence spectra and time-resolved fluorescence spectra (TRFS) of whole filaments of the nitrogen-fixing cyanobacterium Anabaena variabilis at 77 K. The filaments were grown in standard and nitrogen-free media for 6 months. The TRFS were measured with a picosecond time-correlated single photon counting system. Despite the phycobilisome degradation, the energy-transfer paths within the phycobilisome and from the phycobilisome to both photosystems were maintained. However, the energy transfer from photosystem II to photosystem I was suppressed and a specific red chlorophyll band appeared under the nitrogen-depleted condition. 9. Reconstruction of Time-Resolved Neutron Energy Spectra in Z-Pinch Experiments Using Time-of-flight Method International Nuclear Information System (INIS) Rezac, K.; Klir, D.; Kubes, P.; Kravarik, J.
2009-01-01 We present the reconstruction of neutron energy spectra from time-of-flight signals. This technique is useful in experiments with the time of neutron production in the range of about tens or hundreds of nanoseconds. The neutron signals were obtained with fast plastic scintillation detectors sensitive to both hard X-rays and neutrons. The reconstruction is based on the Monte Carlo method, which has been improved by the simultaneous use of neutron detectors placed on two opposite sides of the neutron source. Although the reconstruction from detectors placed on two opposite sides is more difficult and somewhat less accurate (a consequence of several assumptions made when combining the two detection sides), there are some advantages. The most important advantage is the smaller influence of scattered neutrons on the reconstruction. Finally, we describe the estimation of the error of this reconstruction. 10. Simple energy balance model resolving the seasons and the continents - Application to the astronomical theory of the ice ages Science.gov (United States) North, G. R.; Short, D. A.; Mengel, J. G. 1983-01-01 An analysis is undertaken of the properties of a one-level seasonal energy balance climate model having explicit, two-dimensional land-sea geography, where land and sea surfaces are strictly distinguished by the local thermal inertia employed and transport is governed by a smooth, latitude-dependent diffusion mechanism. Solutions of the seasonal cycle for the cases of both ice feedback exclusion and inclusion yield good agreements with real data, using minimal tuning of the adjustable parameters. Discontinuous icecap growth is noted for both a solar constant that is lower by a few percent and a change of orbital elements to favor cool Northern Hemisphere summers.
This discontinuous sensitivity is discussed in the context of the Milankovitch theory of the ice ages, and the associated branch structure is shown to be analogous to the 'small ice cap' instability of simpler models. 11. Performance analysis on borehole energy storage system including utilization of solar thermal and photovoltaic energies; Taiyonetsu hikari riyo wo fukumu borehole energy chozo system no kenkyu Energy Technology Data Exchange (ETDEWEB) Saito, T. [Tohoku University, Sendai (Japan); Yamaguchi, A. [Matsushita Electric Co. Ltd., Osaka (Japan)] 1996-10-27 A permanent borehole energy storage system utilizing solar energy and waste heat from coolers is simulated, to be used as an air conditioning system for super-tall buildings. A 100-m-long pipe is buried vertically into the ground, and a heat medium is caused to circulate in the pipe for the exchange of heat with the soil. Thirty borehole units are used, each measuring 9 m × 9 m (with a pipe pitch of 3 m). Solar cells occupying half of the wall surface facing south and solar collectors installed on the roof supply electric power and heat for cooling and warming. Heat in the ground is transferred mainly by conduction but is also carried by water and gas in movement. So, an analysis is carried out using an equation in which heat and water move at the same time. Because waste heat from cooling and warming systems is accumulated in the ground and none is discharged into the air, big cities will be protected from warming (from developing heat islands). As compared with the conventional boiler-aided air conditioning system, a hybrid borehole system incorporating solar collectors and solar cells will bring about an 80% reduction in CO2 emission and annual energy consumption. 7 refs., 3 figs., 4 tabs. 12. Long term variability of Cygnus X-1. VI.
Energy-resolved X-ray variability 1999-2011 NARCIS (Netherlands) Grinberg, V.; Pottschmidt, K.; Böck, M.; Schmid, C.; Nowak, M.A.; Uttley, P.; Tomsick, J.A.; Rodriguez, J.; Hell, N.; Markowitz, A.; Bodaghee, A.; Cadolle Bel, M.; Rothschild, R.E.; Wilms, J. 2014-01-01 We present the most extensive analysis of Fourier-based X-ray timing properties of the black hole binary Cygnus X-1 to date, based on 12 years of bi-weekly monitoring with RXTE from 1999 to 2011. Our aim is a comprehensive study of timing behavior across all spectral states, including the elusive 13. A distribution-based method to resolve single-molecule Förster resonance energy transfer observations. Science.gov (United States) Backović, Mihailo; Price, E Shane; Johnson, Carey K; Ralston, John P 2011-04-14 We introduce a new approach to analyze single-molecule Förster resonance energy transfer (FRET) data. The method recognizes that FRET efficiencies assumed by traditional ensemble methods are unobservable for single molecules. We propose instead a method to predict distributions of FRET parameters obtained directly from the data. Distributions of FRET rates, given the data, are precisely defined using Bayesian methods and increase the information derived from the data. Benchmark comparisons show that the new method responds faster than traditional averaging. Our approach makes no assumption about the number or distribution of underlying FRET states. The new method also yields information about joint parameter distributions going beyond the standard framework of FRET analysis. For example, the running distribution of FRET means contains more information than any conceivable single measure of FRET efficiency. The method is tested against simulated data and then applied to a pilot-study sample of calmodulin molecules immobilized in lipid vesicles, revealing evidence for multiple dynamical states. 14.
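The distribution-based FRET analysis summarized in entry 13 reports posterior distributions of FRET parameters rather than a single averaged efficiency. A deliberately simplified sketch of that idea, assuming a binomial photon-routing model with a flat prior (an illustration of the Bayesian viewpoint, not the authors' full framework):

```python
import numpy as np

def fret_posterior(n_donor, n_acceptor, grid=2001):
    """Posterior density of the FRET efficiency E given donor/acceptor
    photon counts (both assumed > 0), under the simplifying assumption
    that each detected photon lands in the acceptor channel with
    probability E (binomial likelihood, flat prior).  The posterior is
    then Beta(n_acceptor + 1, n_donor + 1), evaluated here on a grid."""
    E = np.linspace(0.0, 1.0, grid)
    with np.errstate(divide="ignore"):     # log(0) at the grid endpoints
        logp = n_acceptor * np.log(E) + n_donor * np.log(1.0 - E)
    logp -= logp.max()                     # stabilize before exponentiating
    p = np.exp(logp)
    dE = E[1] - E[0]
    p /= p.sum() * dE                      # normalize to a density
    return E, p

def posterior_mean(E, p):
    """Mean of a gridded density, by simple quadrature."""
    return float((E * p).sum() * (E[1] - E[0]))
```

For 70 donor and 30 acceptor photons the posterior mean is (30+1)/(100+2) ≈ 0.304, and the width of the distribution conveys the photon-counting uncertainty that a single efficiency number hides, which is the abstract's central point.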
The MC-DFT approach including the SCS-MP2 energies to the new Minnesota-type functionals. Science.gov (United States) Liu, Po-Chun; Hu, Wei-Ping 2014-08-05 We have applied the multicoefficient density functional theory (MC-DFT) to four recent Minnesota functionals, M06-2X, M08-HX, M11, and MN12-SX, to assess their performance for thermochemical kinetics. The results indicated that the accuracy can be improved significantly using more than one basis set. We further included the SCS-MP2 energies into MC-DFT, and the resulting mean unsigned errors (MUEs) decreased by approximately 0.3 kcal/mol for the most accurate basis set combinations. The M06-2X functional with the simple [6-311+G(d,p)/6-311+G(2d,2p)] combination gave the best performance/cost ratios for the MC-DFT and MC-SCS-MP2|MC-DFT methods with MUE of 1.58 and 1.22 kcal/mol, respectively. Copyright © 2014 Wiley Periodicals, Inc. 15. Investigation of the quaternary structure of an ABC transporter in living cells using spectrally resolved resonance energy transfer Science.gov (United States) Singh, Deo Raj Förster resonance energy transfer (FRET) has become an important tool to study proteins inside living cells. It has been used to explore membrane protein folding and dynamics, determine stoichiometry and geometry of protein complexes, and measure the distance between two molecules. In this dissertation, we use a method based on FRET and optical micro-spectroscopy (OptiMiS) technology, developed in our lab, to probe the structure of dynamic (as opposed to static) protein complexes in living cells. We use this method to determine the association stoichiometry and quaternary structure of an ABC transporter in living cells. Specifically, the transporter we investigate originates from the pathogen Pseudomonas aeruginosa, which is a Gram-negative bacterium with several virulence factors, lipopolysaccharides being one of them.
This pathogen coexpresses two unique forms of lipopolysaccharides on its surface, the A- and B-bands. The A-band polysaccharides, synthesized in the cytoplasm, are translocated into the periplasm through an ATP-binding-cassette (ABC) transporter consisting of a transmembrane protein, Wzm, and a nucleotide-binding protein, Wzt. In P. aeruginosa, all of the biochemical studies of A-band LPS are concentrated on the stages of the synthesis and ligation of polysaccharides (PSs), leaving the export stage involving the ABC transporter unexplored. The mode of PS export through ABC transporters is still unknown. This difficulty is due to the lack of information about the subunit composition and structure of this bi-component ABC transporter. Using the FRET-OptiMiS combination method developed by our lab, we found that Wzt forms a rhombus-shaped homo-tetramer which becomes a square upon co-expression with Wzm, and that Wzm forms a square-shaped homo-tetramer both in the presence and absence of Wzt. Based on these results, we propose a structural model for the double-tetramer complex formed by the bi-component ABC transporter in living cells. An understanding of the 16. Assessment of commercially available energy-efficient room air conditioners including models with low global warming potential (GWP) refrigerants Energy Technology Data Exchange (ETDEWEB) Shah, N. K. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Park, W. Y. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Gerke, B. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States) 2017-08-30 Improving the energy efficiency of room air conditioners (RACs) while transitioning to low global-warming-potential (GWP) refrigerants will be a critical step toward reducing the energy, peak load, and emissions impacts of RACs while keeping costs low.
Previous research quantified the benefits of leapfrogging to high efficiency in tandem with the transition to low-GWP refrigerants for RACs (Shah et al., 2015) and identified opportunities for initial action to coordinate energy efficiency with refrigerant transition in economies constituting about 65% of the global RAC market (Shah et al., 2017). This report describes further research performed to identify the best-performing RACs on the market (i.e., the most efficient models that use low-GWP refrigerants), to support an understanding of the best available technology (BAT). Understanding BAT can help support market-transformation programs for high-efficiency and low-GWP equipment such as minimum energy performance standards (MEPS), labeling, procurement, and incentive programs. We studied RACs available in six economies—China, Europe, India, Japan, South Korea, and the United States—that together account for about 70% of global RAC demand, as well as other emerging economies. The following are our key findings: • Highly efficient RACs using low-GWP refrigerants, e.g., HFC-32 (R-32) and HC-290 (R-290), are commercially available today at prices comparable to similar RACs using high-GWP HCFC-22 (R-22) or HFC-410A (R-410A). • High efficiency is typically a feature of high-end products. However, highly efficient, cost-competitive (less than 1,000 or 1,500 U.S. dollars in retail price, depending on size) RACs are available. • Where R-22 is being phased out, high-GWP R-410A still dominates RAC sales in most mature markets except Japan, where R-32 dominates. • In all of the economies studied except Japan, only a few models are energy efficient and use low-GWP refrigerants. For example, in Europe, India, and Indonesia 17.
MOCCA: A 4k-Pixel Molecule Camera for the Position- and Energy-Resolving Detection of Neutral Molecule Fragments at CSR Science.gov (United States) Gamer, L.; Schulz, D.; Enss, C.; Fleischmann, A.; Gastaldo, L.; Kempf, S.; Krantz, C.; Novotný, O.; Schwalm, D.; Wolf, A. 2016-08-01 We present the design of MOCCA, a large-area particle detector that is developed for the position- and energy-resolving detection of neutral molecule fragments produced in electron-ion interactions at the Cryogenic Storage Ring at the Max Planck Institute for Nuclear Physics in Heidelberg. The detector is based on metallic magnetic calorimeters and consists of 4096 particle absorbers covering a total detection area of 44.8 mm × 44.8 mm. Groups of four absorbers are thermally coupled to a common paramagnetic temperature sensor where the strength of the thermal link is different for each absorber. This allows attributing a detector event within this group to the corresponding absorber by discriminating the signal rise times. A novel readout scheme further allows reading out all 1024 temperature sensors that are arranged in a 32 × 32 square array using only 16+16 current-sensing superconducting quantum interference devices. Numerical calculations taking into account a simplified detector model predict an energy resolution of ΔE_FWHM ≤ 80 eV for all pixels of this detector. 18. Modeling a novel CCHP system including solar and wind renewable energy resources and sizing by a CC-MOPSO algorithm International Nuclear Information System (INIS) Soheyli, Saman; Shafiei Mayam, Mohamad Hossein; Mehrjoo, Mehri 2016-01-01 Highlights: • Considering renewable energy resources as the main prime movers in CCHP systems. • Simultaneous application of FEL and FTL by optimizing two probability functions. • Simultaneous optimization of the equipment and penalty factors by the CC-MOPSO algorithm. • Reducing fuel consumption and pollution up to 263 and 353 times, respectively.
- Abstract: Due to problems such as equipment heat losses, low energy efficiency, increasing pollution, and fossil fuel consumption, combined cooling, heating, and power (CCHP) systems have attracted much attention during the last decade. In this paper, to minimize fossil fuel consumption and pollution, a novel CCHP system including photovoltaic (PV) modules, wind turbines, and solid oxide fuel cells (SOFC) as the prime movers is considered. Moreover, in order to minimize the excess electrical and heat energy production of the CCHP system and thus reduce the need for the local power grid and any auxiliary heat production system, following electrical load (FEL) and following thermal load (FTL) operation strategies are considered simultaneously. In order to determine the optimal number of each system component and also set the penalty factors in the penalty function used, a co-constrained multi-objective particle swarm optimization (CC-MOPSO) algorithm is applied. Utilization of the renewable energy resources, the annual total cost (ATC), and the CCHP system area are considered as the objective functions. The formulation also includes constraints such as loss of power supply probability (LPSP), loss of heat supply probability (LHSP), state of battery charge (SOC), and the number of each CCHP component. A case study of a hypothetical hotel in Kermanshah, Iran, is conducted to verify the feasibility of the proposed system. The numerical results call for 10 wind turbines, 430 PV modules, 11 SOFCs, 106 batteries, and 2 heat storage tanks (HST), with spring as the best season in terms of decreasing cost and fuel consumption. Comparing the results 19. Development and Implementation of a Battery-Electric Light-Duty Class 2a Truck including Hybrid Energy Storage Science.gov (United States) Kollmeyer, Phillip J.
This dissertation addresses two major related research topics: 1) the design, fabrication, modeling, and experimental testing of a battery-electric light-duty Class 2a truck; and 2) the design and evaluation of a hybrid energy storage system (HESS) for this and other vehicles. The work begins with the determination of the truck's peak power and wheel torque requirements (135kW/4900Nm). An electric traction system is then designed that consists of an interior permanent magnet synchronous machine, two-speed gearbox, three-phase motor drive, and LiFePO4 battery pack. The battery pack capacity is selected to achieve a driving range similar to the 2011 Nissan Leaf electric vehicle (73 miles). Next, the demonstrator electric traction system is built and installed in the vehicle, a Ford F150 pickup truck, and an extensive set of sensors and data acquisition equipment is installed. Detailed loss models of the battery pack, electric traction machine, and motor drive are developed and experimentally verified using the driving data. Many aspects of the truck's performance are investigated, including efficiency differences between the two-gear configuration and the optimal gear selection. The remainder focuses on the application of battery/ultracapacitor hybrid energy storage systems (HESS) to electric vehicles. First, the electric truck is modeled with the addition of an ultracapacitor pack and a dc/dc converter. Rule-based and optimal battery/ultracapacitor power-split control algorithms are then developed, and the performance improvements achieved for both algorithms are evaluated for operation at 25°C. The HESS modeling is then extended to low temperatures, where battery resistance increases substantially. To verify the accuracy of the model-predicted results, a scaled hybrid energy storage system is built and the system is tested for several drive cycles and for two temperatures. 
The HESS performance is then modeled for three variants of the vehicle design, including the 20. 75 FR 52528 - FC Landfill Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes... Science.gov (United States) 2010-08-26 ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission FC Landfill Energy, LLC; Supplemental Notice That Initial Market- Based Rate... notice in the above-referenced proceeding, of FC Landfill Energy, LLC's application for market-based rate... 1. 75 FR 61747 - Discount Energy Group, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes... Science.gov (United States) 2010-10-06 ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Discount Energy Group, LLC; Supplemental Notice That Initial Market-Based... supplemental notice in the above-referenced proceeding of Discount Energy Group, LLC's application for market... 2. 77 FR 66976 - Star Energy Partners LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes... Science.gov (United States) 2012-11-08 ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Star Energy Partners LLC; Supplemental Notice That Initial Market-Based Rate...-referenced proceeding of Star Energy Partners LLC's application for market-based rate authority, with an... 3. 75 FR 59260 - HOP Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for... Science.gov (United States) 2010-09-27 ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission HOP Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing... the above-referenced proceeding of HOP Energy, LLC's application for market-based rate authority, with... 4. 
77 FR 47624 - Escanaba Green Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes... Science.gov (United States) 2012-08-09 ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Escanaba Green Energy, LLC; Supplemental Notice That Initial Market-Based... above-referenced proceeding, of Escanaba Green Energy, LLC's application for market-based rate authority... 5. 76 FR 52326 - Green Mountain Energy Company; Supplemental Notice That Initial Market-Based Rate Filing Includes... Science.gov (United States) 2011-08-22 ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Green Mountain Energy Company; Supplemental Notice That Initial Market-Based... above-referenced proceeding of Green Mountain Energy Company's application for market-based rate... 6. 75 FR 59259 - Turner Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request... Science.gov (United States) 2010-09-27 ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Turner Energy, LLC; Supplemental Notice That Initial Market-Based Rate... notice in the above-referenced proceeding of Turner Energy, LLC's application for market-based rate... 7. 78 FR 4143 - Energy Storage Holdings, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes... Science.gov (United States) 2013-01-18 ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Energy Storage Holdings, LLC; Supplemental Notice That Initial Market-Based... above-referenced proceeding, of Energy Storage Holdings, LLC's application for market-based rate... 8. 
Material decomposition through weighted image subtraction in dual-energy spectral mammography with an energy-resolved photon-counting detector using Monte Carlo Simulation Energy Technology Data Exchange (ETDEWEB) Eom, Ji Soo; Kang, Soon Cheol; Lee, Seung Wan [Konyang University, Daejeon (Korea, Republic of) 2017-09-15 Mammography is commonly used for screening early breast cancer. However, mammographic images, which depend on the physical properties of breast components, provide limited information about whether a lesion is malignant or benign. Although a dual-energy subtraction technique decomposes a certain material from a mixture, it increases radiation dose and degrades the accuracy of material decomposition. In this study, we simulated a breast phantom using attenuation characteristics, and we proposed a technique to enable accurate material decomposition by applying weighting factors for dual-energy mammography based on a photon-counting detector using a Monte Carlo simulation tool. We also evaluated the contrast and noise of simulated breast images to validate the proposed technique. As a result, the contrast for a malignant tumor in the dual-energy weighted subtraction technique was 0.98 and 1.06 times that in the general mammography and dual-energy subtraction techniques, respectively. However, the contrast between malignant and benign tumors dramatically increased by a factor of 13.54 due to the low contrast of a benign tumor. Therefore, the proposed technique can increase the material decomposition accuracy for malignant tumors and improve the diagnostic accuracy of mammography. 9. Resolving the Distribution of Energy Critical Elements in Ore Systems through in situ Chemical mapping of Mineral Phases Science.gov (United States) McClenaghan, Sean H.
2017-04-01 The mineral sphalerite is found in a wide range of ore-forming conditions including sedimentary and volcanogenic massive sulphides, as well as epigenetic mineralization associated with intrusive settings such as porphyries, skarns and epithermal veins. Sphalerite is a known host for In, Sn, Ge, Te, and Ga; these represent valuable commodities increasing the value of Zn production worldwide. These elements along with their deleterious counterparts Se, Hg, Tl, and Cd can reveal much about the genesis and evolution of a mineralizing system. From the standpoint of understanding the genesis of various ore systems, mineral chemistry, in particular the accommodation of trace elements in the sphalerite structure, is an ideal proxy for comparing both inter- and intra-deposit variations in hydrothermal geochemistry as well as enabling broad comparisons across a wide spectrum of mineral deposit types. The mineral chemistry of sphalerite will often differ between deposits of an ore district and can even exhibit considerable variability across individual mineral grains in response to evolving hydrothermal fluids and distinct fluid sources. Recent improvements in the field of in situ microanalysis have coupled advances in ICP-MS technology with newer classes of UV Excimer lasers and sample cells with smaller active volumes. This has effectively decreased the amount of ablated material required for analysis, allowing for more discrete analyses and permitting micro-chemical mapping at much smaller scales. It is important to note that while bulk analyses remain a good estimate of bulk metal contents, they do not portray the heterogeneous nature of trace elements in mineral systems, which could indicate the fertility of a system and aid the delineation of vein sphalerite enriched in ECEs. 10.
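Stepping back to the dual-energy mammography abstract above (entry 8), the weighted subtraction it builds on can be sketched generically: work in the log domain and pick the weighting factor that nulls one material's thickness dependence. The attenuation coefficients and phantom geometry below are hypothetical illustration values, not taken from the paper:

```python
import numpy as np

def weighted_subtraction(I_high, I_low, mu_high, mu_low):
    """Dual-energy weighted log-subtraction.  mu_high/mu_low are the
    linear attenuation coefficients of the material to be cancelled at
    the high and low energies; with w = mu_high / mu_low the combination
    S = -ln(I_high) + w * ln(I_low) is independent of that material's
    thickness, leaving contrast only from other materials."""
    w = mu_high / mu_low
    return -np.log(I_high) + w * np.log(I_low)

# Hypothetical two-material phantom: 4 cm total of glandular tissue,
# partly replaced by tumor in one pixel (coefficients in cm^-1, invented).
mu_gl_low, mu_gl_high = 0.8, 0.5
mu_tu_low, mu_tu_high = 0.9, 0.6
t_tumor = np.array([0.0, 1.0])          # cm of tumor along each ray
t_gland = 4.0 - t_tumor
I_low = np.exp(-(mu_gl_low * t_gland + mu_tu_low * t_tumor))
I_high = np.exp(-(mu_gl_high * t_gland + mu_tu_high * t_tumor))
S = weighted_subtraction(I_high, I_low, mu_gl_high, mu_gl_low)
```

Here S is zero wherever only glandular tissue is traversed and nonzero where tumor is present; choosing the weighting factor this way is the mechanism by which the abstract's simulation boosts the contrast between lesion types.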
Optimal fuzzy logic-based PID controller for load-frequency control including superconducting magnetic energy storage units International Nuclear Information System (INIS) Pothiya, Saravuth; Ngamroo, Issarachai 2008-01-01 This paper proposes a new optimal fuzzy-logic-based proportional-integral-derivative (FLPID) controller for load frequency control (LFC) including superconducting magnetic energy storage (SMES) units. Conventionally, the membership functions and control rules of fuzzy logic control are obtained by trial and error or from the designers' experience. To overcome this problem, the multiple tabu search (MTS) algorithm is applied to simultaneously tune the PID gains, membership functions, and control rules of the FLPID controller to minimize frequency deviations of the system against load disturbances. The MTS algorithm introduces additional techniques for improving the search process, such as initialization, adaptive search, multiple searches, crossover, and restarting. Simulation results explicitly show that the performance of the optimum FLPID controller is superior to the conventional PID controller and the non-optimum FLPID controller in terms of overshoot, settling time, and robustness against variations of system parameters. 11. Analytical linear energy transfer model including secondary particles: calculations along the central axis of the proton pencil beam International Nuclear Information System (INIS) Marsolat, F; De Marzi, L; Mazal, A; Pouzoulet, F 2016-01-01 In proton therapy, the relative biological effectiveness (RBE) depends on various types of parameters such as linear energy transfer (LET). An analytical model for LET calculation exists (Wilkens’ model), but secondary particles are not included in this model. In the present study, we propose a correction factor, L_sec, for Wilkens’ model in order to take into account the LET contributions of certain secondary particles.
This study includes secondary protons and deuterons, since the effects of these two types of particles can be described by the same RBE-LET relationship. L_sec was evaluated by Monte Carlo (MC) simulations using the GATE/GEANT4 platform and was defined as the ratio of the LET_d distribution of all protons and deuterons to that of primary protons only. This method was applied to the innovative Pencil Beam Scanning (PBS) delivery systems and L_sec was evaluated along the beam axis. This correction factor indicates the high contribution of secondary particles in the entrance region, with L_sec values higher than 1.6 for a 220 MeV clinical pencil beam. MC simulations showed the impact of pencil beam parameters, such as mean initial energy, spot size, and depth in water, on L_sec. The variation of L_sec with these different parameters was integrated in a polynomial function of the L_sec factor in order to obtain a model universally applicable to all PBS delivery systems. The validity of this correction factor applied to Wilkens’ model was verified along the beam axis of various pencil beams in comparison with MC simulations. A good agreement was obtained between the corrected analytical model and the MC calculations, with mean-LET deviations along the beam axis less than 0.05 keV μm^-1. These results demonstrate the efficacy of our new correction of the existing LET model in order to take into account secondary protons and deuterons along the pencil beam axis. (paper) 12. A furnace for the in situ study of the formation of inorganic solids at high temperature using time-resolved energy-dispersive x-ray diffraction Science.gov (United States) Geselbracht, Margret J.; Walton, Richard I.; Cowell, E. Sarah; Millange, Franck; O'Hare, Dermot 2000-11-01 The design, construction, and use of a furnace from which time-resolved x-ray diffraction data may be measured from reacting mixtures of solids or of solids and liquids is described.
The furnace is a vertical tube design, constructed from commercially available components, and can operate at temperatures of up to 1000 °C. The apparatus is designed to heat sample tubes of up to 3 cm diameter. The use of high-intensity synchrotron-generated white-beam x rays allows the tube and its contents to be penetrated; thus x-ray diffraction data can be measured from reactions taking place in laboratory-sized reaction vessels. The energy-dispersive diffraction geometry allows rapid data collection (of the order of seconds); hence reactions can be followed continuously in real time. The use of the furnace is demonstrated by results from experiments performed on Station 16.4 of the Daresbury Synchrotron Radiation Source, UK. Two distinct reaction types are studied, both used to prepare the layered perovskite RbCa2Nb3O10: first, a solid state route at 800 °C and second a flux route, performed in molten RbCl, also at 800 °C. 13. Loop-driven graphical unitary group approach to the electron correlation problem, including configuration interaction energy gradients International Nuclear Information System (INIS) Brooks, B.R. 1979-09-01 The Graphical Unitary Group Approach (GUGA) was cast into an extraordinarily powerful form by restructuring the Hamiltonian in terms of loop types. This restructuring allows the adoption of the loop-driven formulation which illuminates vast numbers of previously unappreciated relationships between otherwise distinct Hamiltonian matrix elements. The theoretical/methodological contributions made here include the development of the loop-driven formula generation algorithm, a solution of the upper walk problem used to develop a loop breakdown algorithm, the restriction of configuration space employed to the multireference interacting space, and the restructuring of the Hamiltonian in terms of loop types. Several other developments are presented and discussed. 
Among these developments are the use of new segment coefficients, improvements in the loop-driven algorithm, implicit generation of loops wholly within the external space adapted within the framework of the loop-driven methodology, and comparisons of the diagonalization tape method to the direct method. It is also shown how it is possible to implement the GUGA method without the time-consuming full (m^5) four-index transformation. A particularly promising new direction presented here involves the use of the GUGA methodology to obtain one-electron and two-electron density matrices. Once these are known, analytical gradients (first derivatives) of the CI potential energy are easily obtained. Several test calculations are examined in detail to illustrate the unique features of the method. Also included is a calculation on the asymmetric 2 ^1A' state of SO2 with 23,613 configurations to demonstrate methods for the diagonalization of very large matrices on a minicomputer. 6 figures, 6 tables 14. Analysis of Implementing Lifetime Energy Cost, Including Fully Burdened Cost of Fuel and Energy Footprints of Contractors, as Mandatory Decision Factors in Navy Acquisition Science.gov (United States) 2010-06-01 Keywords: Cost of Energy, Energy Efficiency, Energy Footprint, Mandatory Evaluation Factors, Navy Acquisition, Energy Management Systems, Corporate Social Responsibility. Under corporate social responsibility (CSR), in the pursuit of maximizing profit, corporations are incentivized, at least theoretically, to produce their goods… 15. MSTor: A program for calculating partition functions, free energies, enthalpies, entropies, and heat capacities of complex molecules including torsional anharmonicity Science.gov (United States) Zheng, Jingjing; Mielke, Steven L.; Clarkson, Kenneth L.; Truhlar, Donald G.
2012-08-01 processors) Operating system: Linux/Unix/Mac OS RAM: 2 Mbytes Classification: 16.3, 16.12, 23 Nature of problem: Calculation of the partition functions and thermodynamic functions (standard-state energy, enthalpy, entropy, and free energy as functions of temperature) of complex molecules involving multiple torsional motions. Solution method: The multi-structural approximation with torsional anharmonicity (MS-T). The program also provides results for the multi-structural local harmonic approximation [1]. Restrictions: There is no limit on the number of torsions that can be included in either the Voronoi calculation or the full MS-T calculation. In practice, the range of problems that can be addressed with the present method consists of all multi-torsional problems for which one can afford to calculate all the conformations and their frequencies. Unusual features: The method can be applied to transition states as well as stable molecules. The program package also includes the hull program for the calculation of Voronoi volumes and six utility codes that can be used as stand-alone programs to calculate reduced moment-of-inertia matrices by the method of Kilpatrick and Pitzer, to generate conformational structures, to calculate, either analytically or by Monte Carlo sampling, volumes for torsional subdomains defined by Voronoi tessellation of the conformational subspace, to generate template input files, and to calculate one-dimensional torsional partition functions using the torsional eigenvalue summation method. Additional comments: The program package includes a manual, installation script, and input and output files for a test suite. Running time: There are 24 test runs. The running time of the test runs on a single processor of the Itasca computer is less than 2 seconds. J. Zheng, T. Yu, E. Papajak, I.M. Alecu, S.L. Mielke, D.G.
Truhlar, Practical methods for including torsional anharmonicity in thermochemical calculations of complex molecules: The internal-coordinate multi 16. High Flux Energy-Resolved Photon-Counting X-Ray Imaging Arrays with CdTe and CdZnTe for Clinical CT International Nuclear Information System (INIS) Barber, William C.; Hartsough, Neal E.; Gandhi, Thulasidharan; Iwanczyk, Jan S.; Wessel, Jan C.; Nygard, Einar; Malakhov, Nail; Wawrzyniak, Gregor; Dorholt, Ole; Danielsen, Roar 2013-06-01 We have fabricated fast room-temperature energy dispersive photon counting x-ray imaging arrays using pixellated cadmium telluride (CdTe) and cadmium zinc telluride (CdZnTe) semiconductors. We have also fabricated fast application specific integrated circuits (ASICs) with a two dimensional (2D) array of inputs for readout from the CdZnTe sensors. The new CdTe and CdZnTe sensors have a 2D array of pixels with a 0.5 mm pitch and can be tiled in 2D. The new 2D ASICs have four energy discriminators per pixel with a linear energy response across the entire dynamic range for clinical CT. The ASICs can also be tiled in 2D and are designed to fit within the active area of the 2D sensors. We have measured several important performance parameters including: an output count rate (OCR) in excess of 20 million counts per second per square mm, an energy resolution of 7 keV full width at half maximum (FWHM) across the entire dynamic range, and a noise floor less than 20 keV. This is achieved by directly interconnecting the ASIC inputs to the pixels of the CdTe and CdZnTe sensors, incurring very little additional capacitance. We present a comparison of the performance of the CdTe and CdZnTe sensors including the OCR, FWHM energy resolution, and noise floor. (authors) 17.
A comparative transmission electron microscopy, energy dispersive x-ray spectroscopy and spatially resolved micropillar compression study of the yttria partially stabilised zirconia - porcelain interface in dental prosthesis Energy Technology Data Exchange (ETDEWEB) Lunt, Alexander J.G., E-mail: alexander.lunt@chch.ox.ac.uk [Department of Engineering Science, University of Oxford, Parks Road, Oxford, Oxfordshire OX1 3PJ (United Kingdom); Mohanty, Gaurav, E-mail: gaurav.mohanty@empa.ch [EMPA Materials Science & Technology, Feuerwerkerstrasse 39, CH-3602 Thun (Switzerland); Ying, Siqi, E-mail: siqi.ying@eng.ox.ac.uk [Department of Engineering Science, University of Oxford, Parks Road, Oxford, Oxfordshire OX1 3PJ (United Kingdom); Dluhoš, Jiří, E-mail: jiri.dluhos@tescan.cz [TESCAN Brno, s.r.o., Libušina tř. 1, 623 00 Brno-Kohoutovice (Czech Republic); Sui, Tan, E-mail: tan.sui@eng.ox.ac.uk [Department of Engineering Science, University of Oxford, Parks Road, Oxford, Oxfordshire OX1 3PJ (United Kingdom); Neo, Tee K., E-mail: neophyte@singnet.com.sg [Specialist Dental Group, Mount Elizabeth Orchard, 3 Mount Elizabeth, #08-03/08-08/08-10, 228510 (Singapore); Michler, Johann, E-mail: johann.michler@empa.ch [EMPA Materials Science & Technology, Feuerwerkerstrasse 39, CH-3602 Thun (Switzerland); Korsunsky, Alexander M., E-mail: alexander.korsunsky@eng.ox.ac.uk [Department of Engineering Science, University of Oxford, Parks Road, Oxford, Oxfordshire OX1 3PJ (United Kingdom) 2015-12-01 μm. - Highlights: • Cross section of yttria partially stabilised zirconia (YPSZ)–porcelain prosthesis • Energy dispersive X-ray spectroscopy shows 2–6 μm elemental diffusion zone. • Transmission electron microscopy shows voids in near interface porcelain. • Complex near interface YPSZ microstructure shows grains embedded in porcelain. • Spatially resolved micropillar compression reveals modulus and strength variation. 18. 
Dynamical observation of lithium insertion/extraction reaction during charge-discharge processes in Li-ion batteries by in situ spatially resolved electron energy-loss spectroscopy. Science.gov (United States) Shimoyamada, Atsushi; Yamamoto, Kazuo; Yoshida, Ryuji; Kato, Takehisa; Iriyama, Yasutoshi; Hirayama, Tsukasa 2015-12-01 All-solid-state Li-ion batteries (LIBs) with solid electrolytes are expected to be the next generation devices to overcome serious issues facing conventional LIBs with liquid electrolytes. However, the large Li-ion transfer resistance at the electrode/solid-electrolyte interfaces causes low power density and prevents practical use. In-situ-formed negative electrodes prepared by decomposing the solid electrolyte Li(1+x+3z)Alx(Ti,Ge)(2-x)Si(3z)P(3-z)O12 (LASGTP) with an excess Li-ion insertion reaction are effective electrodes providing low Li-ion transfer resistance at the interfaces. Prior to our work, however, it had still been unclear how the negative electrodes were formed in the parent solid electrolytes. Here, we succeeded in dynamically visualizing the formation by in situ spatially resolved electron energy-loss spectroscopy in a transmission electron microscope mode (SR-TEM-EELS). The Li-ions were gradually inserted into the solid electrolyte region around 400 nm from the negative current-collector/solid-electrolyte interface in the charge process. Some of the ions were then extracted in the discharge process, and the rest were diffused such that the distribution was almost flat, resulting in the negative electrodes. The redox reaction of Ti(4+)/Ti(3+) in the solid electrolyte was also observed in situ during the Li insertion/extraction processes. The in situ SR-TEM-EELS revealed the mechanism of the electrochemical reaction in solid-state batteries. © The Author 2015. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com. 
19. Development of a Time-Resolved Fluorescence Resonance Energy Transfer Ultrahigh-Throughput Screening Assay for Targeting the NSD3 and MYC Interaction. Science.gov (United States) Xiong, Jinglin; Pecchi, Valentina Gonzalez; Qui, Min; Ivanov, Andrey A; Mo, Xiulei; Niu, Qiankun; Chen, Xiang; Fu, Haian; Du, Yuhong Epigenetic modulators play critical roles in reprogramming of cellular functions, emerging as a new class of promising therapeutic targets. Nuclear receptor binding SET domain protein 3 (NSD3) is a member of the lysine methyltransferase family. Interestingly, the short isoform of NSD3 without the methyltransferase fragment, NSD3S, exhibits oncogenic activity in a wide range of cancers. We recently showed that NSD3S interacts with MYC, a central regulator of tumorigenesis, suggesting a mechanism by which NSD3S regulates cell proliferation through engaging MYC. Thus, small molecule inhibitors of the NSD3S/MYC interaction will be valuable tools for understanding the function of NSD3 in tumorigenesis for potential cancer therapeutic discovery. Here we report the development of a cell lysate-based time-resolved fluorescence resonance energy transfer (TR-FRET) assay in an ultrahigh-throughput screening (uHTS) format to monitor the interaction of NSD3S with MYC. In our TR-FRET assay, anti-Flag-terbium and anti-glutathione S-transferase (GST)-d2, a pair of fluorophores, were used to indirectly label Flag-tagged NSD3 and GST-MYC in HEK293T cell lysates. This TR-FRET assay is robust in a 1,536-well uHTS format, with signal-to-background >8 and a Z' factor >0.7. A pilot screening with the Spectrum library of 2,000 compounds identified several positive hits. One positive compound was confirmed to disrupt the NSD3/MYC interaction in an orthogonal protein-protein interaction assay. Thus, our optimized uHTS assay could be applied to future scaling up of a screening campaign to identify small molecule inhibitors targeting the NSD3/MYC interaction. 20.
Development of Lab-to-Fab Production Equipment Across Several Length Scales for Printed Energy Technologies, Including Solar Cells DEFF Research Database (Denmark) Hösel, Markus; Dam, Henrik Friis; Krebs, Frederik C 2015-01-01 We describe and review how the scaling of printed energy technologies not only requires scaling of the input materials but also the machinery used in the processes. The general consensus that ultrafast processing of technologies with large energy capacity can only be realized using roll-to-roll … the lower end of the industrial scale. The machinery bridges the gap through firstly achieving improved ink efficiency without surface contact, followed by better ink efficiency at higher speeds, and finally large-area processing at high speed with very high ink efficiency. 1. Time-resolved vibrational spectroscopy Energy Technology Data Exchange (ETDEWEB) Tokmakoff, Andrei [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Champion, Paul [Northeastern Univ., Boston, MA (United States); Heilweil, Edwin J. [National Inst. of Standards and Technology (NIST), Boulder, CO (United States); Nelson, Keith A. [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Ziegler, Larry [Boston Univ., MA (United States) 2009-05-14 This document contains the Proceedings from the 14th International Conference on Time-Resolved Vibrational Spectroscopy, which was held in Meredith, NH from May 9-14, 2009. The study of molecular dynamics in chemical reactions and biological processes using time-resolved spectroscopy plays an important role in our understanding of energy conversion, storage, and utilization problems.
Fundamental studies of chemical reactivity, molecular rearrangements, and charge transport are broadly supported by the DOE's Office of Science because of their role in the development of alternative energy sources, the understanding of biological energy conversion processes, the efficient utilization of existing energy resources, and the mitigation of reactive intermediates in radiation chemistry. In addition, time-resolved spectroscopy is central to all five of DOE's grand challenges for fundamental energy science. The Time-Resolved Vibrational Spectroscopy conference is organized biennially to bring the leaders in this field from around the globe together with young scientists to discuss the most recent scientific and technological advances. The latest technology in ultrafast infrared, Raman, and terahertz spectroscopy and the scientific advances that these methods enable were covered. Particular emphasis was placed on new experimental methods used to probe molecular dynamics in liquids, solids, interfaces, nanostructured materials, and biomolecules. 2. Does low-energy sweetener consumption affect energy intake and body weight? A systematic review, including meta-analyses, of the evidence from human and animal studies NARCIS (Netherlands) Rogers, P.J.; Hogenkamp, P.S.; Graaf, de Kees; Higgs, S.; Lluch, A.; Ness, A.R.; Penfold, C.; Perry, R.; Putz, P.; Yeomans, M.R.; Mela, D.J. 2016-01-01 By reducing energy density, low-energy sweeteners (LES) might be expected to reduce energy intake (EI) and body weight (BW). To assess the totality of the evidence testing the null hypothesis that LES exposure (versus sugars or unsweetened alternatives) has no effect on EI or BW, we conducted a 3.
ChromAIX2: A large area, high count-rate energy-resolving photon counting ASIC for a Spectral CT Prototype Science.gov (United States) Steadman, Roger; Herrmann, Christoph; Livne, Amir 2017-08-01 Spectral CT based on energy-resolving photon counting detectors is expected to deliver additional diagnostic value at a lower dose than current state-of-the-art CT [1]. The capability of simultaneously providing a number of spectrally distinct measurements not only allows distinguishing between photo-electric and Compton interactions but also discriminating contrast agents that exhibit a K-edge discontinuity in the absorption spectrum, referred to as K-edge Imaging [2]. Such detectors are based on direct converting sensors (e.g. CdTe or CdZnTe) and high-rate photon counting electronics. To support the development of Spectral CT and show the feasibility of obtaining rates exceeding 10 Mcps/pixel (Poissonian observed count-rate), the ChromAIX ASIC has been previously reported showing 13.5 Mcps/pixel (150 Mcps/mm2 incident) [3]. The ChromAIX has been improved to offer the possibility of a large area coverage detector, and increased overall performance. The new ASIC is called ChromAIX2, and delivers count-rates exceeding 15 Mcps/pixel with an rms-noise performance of approximately 260 e-. It has an isotropic pixel pitch of 500 μm in an array of 22×32 pixels and is tile-able on three of its sides. The pixel topology consists of a two stage amplifier (CSA and Shaper) and a number of test features allowing the ASIC to be thoroughly characterized without a sensor. A total of 5 independent thresholds are also available within each pixel, allowing 5 spectrally distinct measurements to be acquired simultaneously. The ASIC also incorporates a baseline restorer to eliminate excess currents induced by the sensor (e.g. dark current and low frequency drifts) which would otherwise cause an energy estimation error.
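As an aside on the observed versus incident count rates quoted for such photon-counting channels: the roll-over of the observed rate at high incident flux is commonly described by a paralyzable dead-time model. The sketch below is a generic illustration of that model, not a ChromAIX2 specification; the dead time `tau` is an invented value:

```python
# Generic illustration (not a measured ChromAIX2 parameter set): the
# paralyzable dead-time model m = n * exp(-n * tau) relating the true
# incident rate n to the observed count rate m for dead time tau.
import math

def observed_rate(incident_rate, dead_time):
    """Observed count rate under the paralyzable dead-time model."""
    return incident_rate * math.exp(-incident_rate * dead_time)

tau = 20e-9  # s, hypothetical per-channel dead time
for n in (1e6, 1e7, 5e7):               # incident counts per second
    print(f"{n:.0e} cps incident -> {observed_rate(n, tau):.3e} cps observed")
```

In this model the observed rate peaks at an incident rate of 1/tau and then falls, which is why photon-counting ASICs are specified by a maximum usable output count rate rather than by incident flux alone.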
In this paper we report on the inherent electrical performance of the ChromAIX2 as well as measurements obtained with CZT (CdZnTe)/CdTe sensors using X-rays and radioactive sources. 4. An adaptive load dispatching and forecasting strategy for a virtual power plant including renewable energy conversion units International Nuclear Information System (INIS) Tascikaraoglu, A.; Erdinc, O.; Uzunoglu, M.; Karakas, A. 2014-01-01 Highlights: • Feasibility of virtual power plant concept for electricity market participation. • An economic operation based adaptive load dispatching strategy. • A new meteorological data forecasting algorithm. • Long term scheduling of virtual power plant components. - Abstract: The increasing awareness of the risky state of conventional energy sources in terms of future energy supply security and health of environment has promoted the research activities on alternative energy systems. However, due to the fact that the power production of main alternative sources such as wind and solar is directly related to meteorological conditions, these sources should be combined with dispatchable energy sources in a hybrid combination in order to ensure security of demand supply. In this study, the evaluation of such a hybrid system consisting of wind, solar, hydrogen and thermal power systems in the concept of virtual power plant strategy is realized. An economic operation-based load dispatching strategy that can interactively adapt to the real measured wind and solar power production values is proposed. The adaptation of the load dispatching algorithm is provided by the update mechanism employed in the meteorological condition forecasting algorithms provided by the combination of Empirical Mode Decomposition, Cascade-Forward Neural Network and Linear Model through a fusion strategy. Thus, the effects of the stochastic nature of solar and wind energy systems are better overcome in order to participate in the electricity market with higher benefits. 5.
A neural network potential energy surface for the F + CH4 reaction including multiple channels based on coupled cluster theory. Science.gov (United States) Chen, Jun; Xu, Xin; Liu, Shu; Zhang, Dong H 2018-03-22 We report here a new global and full dimensional potential energy surface (PES) for the F + CH4 reaction. This PES was constructed by using neural networks (NN) fitting to about 99 000 ab initio energies computed at the UCCSD(T)-F12a/aug-cc-pVTZ level of theory, and the correction terms considering the influence of a larger basis set as well as spin-orbit couplings were further implemented with a hierarchical scheme. This PES, covering both the abstraction and substitution channels, has an overall fitting error of 8.24 meV in total, and 4.87 meV for energies within 2.5 eV using a segmented NN fitting method, and is more accurate than the previous PESs. 6. Total cross-sections for reactions of high energy particles (including elastic, topological, inclusive and exclusive reactions). Subvol. b International Nuclear Information System (INIS) Schopper, H.; Moorhead, W.G.; Morrison, D.R.O. 1988-01-01 The aim of this report is to present a compilation of cross-sections (i.e. reaction rates) of elementary particles at high energy. The data are presented in the form of tables, plots and some fits, which should be easy for the reader to use and may enable him to estimate cross-sections for presently unmeasured energies. We have analyzed all the data published in the major Journals and Reviews for momenta of the incoming particles larger than ≅ 50 MeV/c, since the early days of elementary particle physics and, for each reaction, we have selected the best cross-section data available. We have restricted our attention to integrated cross-sections, such as total cross-sections, exclusive and inclusive cross-sections etc., at various incident beam energies.
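The neural-network PES fitting summarized in the F + CH4 abstract above can be illustrated in miniature: a one-hidden-layer network trained by plain gradient descent on a 1-D Morse-like curve, standing in for the ~99 000-point UCCSD(T)-F12a fit. Everything here (network size, learning rate, target function) is an illustrative assumption, not the authors' method:

```python
# Toy sketch of neural-network PES fitting: a tiny tanh network fitted
# to a 1-D Morse-like potential.  All hyperparameters are invented.
import numpy as np

rng = np.random.default_rng(0)

# stand-in "ab initio" data: a Morse-like curve on a grid of bond lengths
r = np.linspace(0.8, 4.0, 200)
e_ref = (1.0 - np.exp(-1.5 * (r - 1.4))) ** 2

x = r[None, :]            # inputs,  shape (1, N)
y = e_ref[None, :]        # targets, shape (1, N)

n_hidden = 16             # one hidden layer of tanh units, linear output
w1 = rng.normal(0.0, 1.0, (n_hidden, 1)); b1 = np.zeros((n_hidden, 1))
w2 = rng.normal(0.0, 1.0, (1, n_hidden)); b2 = np.zeros((1, 1))

def forward(x):
    h = np.tanh(w1 @ x + b1)
    return w2 @ h + b2, h

n = x.shape[1]
loss0 = np.mean((forward(x)[0] - y) ** 2)   # error of the untrained net
for _ in range(2000):                       # plain gradient descent on MSE
    yhat, h = forward(x)
    d = 2.0 * (yhat - y) / n                # dMSE/dyhat
    w2_g, b2_g = d @ h.T, d.sum(axis=1, keepdims=True)
    dh = (w2.T @ d) * (1.0 - h ** 2)        # backprop through tanh
    w1_g, b1_g = dh @ x.T, dh.sum(axis=1, keepdims=True)
    w2 -= 1e-3 * w2_g; b2 -= 1e-3 * b2_g
    w1 -= 1e-3 * w1_g; b1 -= 1e-3 * b1_g

loss = np.mean((forward(x)[0] - y) ** 2)
print(loss0, loss)
```

Production PES fits differ mainly in scale and inputs (permutationally invariant coordinates, thousands of geometries, better optimizers), but the fitting objective is the same squared-error minimization sketched here.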
We have disregarded data affected by geometrical and/or kinematical cuts which would make them not directly comparable to other data at different energies. Also, in the case of exclusive reactions, we have left out data where not all of the particles in the final state were unambiguously identified. This work contains reactions induced by neutrinos, gammas, charged pions, kaons, nucleons, antinucleons and hyperons. (orig./HSI) 7. The choice of primary energy source including PV installation for providing electric energy to a public utility building - a case study Science.gov (United States) Radomski, Bartosz; Ćwiek, Barbara; Mróz, Tomasz M. 2017-11-01 The paper presents a multicriteria decision aid analysis of the choice of PV installation providing electric energy to a public utility building. From the energy management point of view, electricity obtained from solar radiation has become a crucial renewable energy source. Application of PV installations may prove a profitable solution from the energy, economic and ecological points of view for both existing and newly erected buildings. Featured variants of PV installations have been assessed by multicriteria analysis based on the ANP (Analytic Network Process) method. Technical, economical, energy and environmental criteria have been identified as main decision criteria. The defined set of decision criteria has an open character and can be modified in the dialog process between the decision-maker and the expert - in the present case, an expert in planning of development of energy supply systems. The proposed approach has been used to evaluate three variants of PV installation acceptable for an existing educational building located in Poznań, Poland - the building of the Faculty of Chemical Technology, Poznań University of Technology.
Multi-criteria analysis based on the ANP method and the calculation software Super Decisions has proven to be an effective tool for energy planning, leading to the indication of the recommended variant of PV installation in existing and newly erected public buildings. The achieved results show the prospects and possibilities of rational renewable energy use as a comprehensive solution for public utility buildings. 8. Energy conservation in the Netherlands 1995-2006. Including decomposition of the energy consumption trend; Energiebesparing in Nederland 1995-2007. Inclusief decompositie energieverbruikstrend Energy Technology Data Exchange (ETDEWEB) Gerdes, J.; Boonekamp, P.G.M. [ECN Beleidsstudies, Petten (Netherlands); Vreuls, H. [SenterNovem, Utrecht (Netherlands); Verdonk, M. [Planbureau voor de Leefomgeving PBL, Bilthoven (Netherlands); Pouwelse, J.W. [Centraal Planbureau CPB, Den Haag (Netherlands) 2009-08-15 Realized energy savings in the Netherlands for the period 1995-2007 are presented for the sectors households, industry, agriculture, services, transport, refineries and electricity, and on a national level. The figures on energy savings are based on the 'Protocol Monitoring Energy Savings', a common methodology and database for calculating energy savings. Results are presented for savings on final energy use, conversion in end-use sectors (co-generation) and conversion in the energy sector. National savings for the period 1995-2007 equal 0.9% per year on average, with a decreasing tendency in recent years. Continuing the trends of last year, the highest figure for end-use sectors is found for agriculture (2.6%) and the lowest figure for transport (0.1%). An uncertainty analysis reveals that the margin for the national savings figure is ±0.3 percentage points. At the request of PBL, a decomposition of the change in energy use into 14 different factors has been conducted.
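A decomposition of an energy-use trend into explanatory factors, of the kind mentioned in the Dutch monitoring study above, can be sketched with a two-factor logarithmic-mean Divisia (LMDI-I) split into an activity effect and an intensity (savings) effect. The study itself uses 14 factors; the two-factor form and all numbers below are invented for illustration:

```python
# Illustrative two-factor LMDI-I decomposition: energy use E = A * I
# (activity times intensity).  All figures are invented, not the
# Dutch monitoring results.
import math

def log_mean(a, b):
    """Logarithmic mean, the LMDI weighting of the two endpoint values."""
    return (a - b) / math.log(a / b) if a != b else a

E0, E1 = 100.0, 110.0        # energy use in start and end year (invented, PJ)
A0, A1 = 1.00, 1.25          # activity index (e.g. production volume)
I0, I1 = E0 / A0, E1 / A1    # energy intensity = use per unit of activity

L = log_mean(E0, E1)
activity_effect  = L * math.log(A1 / A0)   # growth had intensity stayed flat
intensity_effect = L * math.log(I1 / I0)   # negative value = realized savings

# the logarithmic-mean weights make the contributions sum exactly to E1 - E0
assert abs((activity_effect + intensity_effect) - (E1 - E0)) < 1e-9
print(activity_effect, intensity_effect)
```

The additive-exact property shown by the assertion is the reason LMDI is a standard choice for this kind of savings monitoring.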
This shows that, had no savings been achieved, the growth of energy use from 1995 to 2007 would have been almost twice as high. [Dutch] This report presents the energy-saving figures for the period 1995-2007, calculated according to the Protocol Monitoring Energiebesparing (PME). The savings are calculated for the end-use sectors industry, households, transport, agriculture and horticulture, services and refineries, as well as for the power plants and at the national level. 9. Calculations of the one-body electronic structure of the strongly correlated systems including self-energy effects Energy Technology Data Exchange (ETDEWEB) Costa-Quintana, J.; Sanchez-Lopez, M.M.; Lopez-Aguilar, F. [Grup d'Electromagnetisme, Edifici Cn, Universitat Autonoma de Barcelona 08193, Bellaterra, Barcelona (Spain) 1996-10-01 We give a method to obtain the quasiparticle band structure and renormalized density of states by diagonalizing the interacting system Green function. This method operates for any self-energy approximation appropriate to strongly correlated systems. Application to CeSi2 and YBa2Cu3O7 is analyzed as a probe for this band calculation method. © 1996 The American Physical Society. 10. 48 CFR 1552.239-103 - Acquisition of Energy Star Compliant Microcomputers, Including Personal Computers, Monitors and... Science.gov (United States) 2010-10-01 ... Compliant Microcomputers, Including Personal Computers, Monitors and Printers. 1552.239-103 Section 1552.239... Star Compliant Microcomputers, Including Personal Computers, Monitors and Printers. As prescribed in... Personal Computers, Monitors, and Printers (APR 1996) (a) The Contractor shall provide computer products... 11. Ernest Orlando Lawrence Awards Ceremony for 2011 Award Winners (Presentations, including remarks by Energy Secretary, Dr.
Steven Chu) International Nuclear Information System (INIS) Chu, Steven 2012-01-01 The winners for 2011 of the Department of Energy's Ernest Orlando Lawrence Award were recognized in a ceremony held May 21, 2012. Dr. Steven Chu and others spoke of the importance of the accomplishments and the prestigious history of the award. The recipients of the Ernest Orlando Lawrence Award for 2011 are: Riccardo Betti (University of Rochester); Paul C. Canfield (Ames Laboratory); Mark B. Chadwick (Los Alamos National Laboratory); David E. Chavez (Los Alamos National Laboratory); Amit Goyal (Oak Ridge National Laboratory); Thomas P. Guilderson (Lawrence Livermore National Laboratory); Lois Curfman McInnes (Argonne National Laboratory); Bernard Matthew Poelker (Thomas Jefferson National Accelerator Facility); and Barry F. Smith (Argonne National Laboratory). 12. TeV-scale jet energy calibration using multijet events including close-by jet effects at the ATLAS experiment CERN Document Server The ATLAS collaboration 2013-01-01 With the large number of proton-proton collisions delivered by the Large Hadron Collider at a centre-of-mass energy of $\sqrt{s}=7$ TeV in 2011, it became possible to probe the jet transverse momentum (pT) scale beyond the TeV range in events with multijet production. The jet energy scale (JES) uncertainty, which is one of the most important sources of systematic uncertainties for new physics searches at high pT, is evaluated using in-situ techniques based on the pT balance in events with a photon or $Z$ boson as well as in dijet events. Exploiting the pT balance technique between a system of low-pT jets and a leading jet at high pT in multijet events, with the calibration (provided by the gamma-jet and Z+jet events) applied to the low-pT jets, allows the extension of the in-situ determination of JES calibration and uncertainty to the TeV-scale.
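The multijet pT-balance idea described above reduces to comparing the leading-jet pT with the vector-sum pT of the calibrated recoil jets. A minimal sketch follows; the jet kinematics are invented for illustration and the observable is simplified to the transverse plane (no rapidity cuts, resolution effects, or unfolding):

```python
# Simplified illustration of a multijet pT-balance observable: the
# leading jet recoils against a system of lower-pT jets, and the ratio
# of the two transverse momenta probes the leading-jet energy scale.
# Jet four-momenta are reduced to (pT, phi); all values are invented.
import math

def pt_vector_sum(jets):
    """Vector-sum pT of a list of (pt, phi) jets in the transverse plane."""
    px = sum(pt * math.cos(phi) for pt, phi in jets)
    py = sum(pt * math.sin(phi) for pt, phi in jets)
    return math.hypot(px, py)

leading = (1200.0, 0.0)                       # TeV-scale jet: (pT [GeV], phi)
recoil  = [(500.0, math.pi - 0.1),            # calibrated low-pT jets
           (400.0, math.pi + 0.2),
           (320.0, math.pi)]

mjb = leading[0] / pt_vector_sum(recoil)      # multijet balance ratio
print(round(mjb, 3))
```

In a calibrated detector this ratio clusters near unity event by event; systematic deviations of its average from one are what the in-situ technique converts into a high-pT jet energy scale correction and uncertainty.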
Results are presented for the JES uncertainty using the multijet balance technique based on the ATLAS data collected in 2011 corresponding to an integrated luminosity... 13. Time-resolved resonance Raman spectroscopy of 1,3,5-hexatrienes in the lowest excited triplet state. The potential energy surface in T1 NARCIS (Netherlands) Wilbrandt, R.; Langkilde, F.W.; Brouwer, A.M.; Negri, F.; Orlandi, G. 1990-01-01 Time-resolved resonance Raman spectroscopy is applied to the study of the T1 state of 1,3,5-hexatriene and deuteriated and methylated derivatives in solution. The technique is described briefly. The experimentally obtained resonance Raman spectra are discussed in the light of theoretical Quantum 14. Energy Metabolism of the Brain, Including the Cooperation between Astrocytes and Neurons, Especially in the Context of Glycogen Metabolism. Science.gov (United States) Falkowska, Anna; Gutowska, Izabela; Goschorska, Marta; Nowacki, Przemysław; Chlubek, Dariusz; Baranowska-Bosiacka, Irena 2015-10-29 Glycogen metabolism has important implications for the functioning of the brain, especially the cooperation between astrocytes and neurons. According to various research data, in a glycogen deficiency (for example during hypoglycemia) glycogen supplies are used to generate lactate, which is then transported to neighboring neurons. Likewise, during periods of intense activity of the nervous system, when the energy demand exceeds supply, astrocyte glycogen is immediately converted to lactate, some of which is transported to the neurons. Thus, glycogen from astrocytes functions as a kind of protection against hypoglycemia, ensuring preservation of neuronal function. The neuroprotective effect of lactate during hypoglycemia or cerebral ischemia has been reported in literature. This review goes on to emphasize that while neurons and astrocytes differ in metabolic profile, they interact to form a common metabolic cooperation. 15. 
FY 1997 report on the field survey on country situations including efficient energy consumption. Vietnam; 1997 nendo chosa hokokusho (energy shohi koritsuka nado chiiki josei genchi chosa). Vietnam Energy Technology Data Exchange (ETDEWEB) NONE 1998-03-01 A field survey was made of the current state of and issues concerning energy in Vietnam. In Vietnam, firewood is in wide use as non-commercial energy and accounts for about half of total energy consumption. Other energy sources such as hydroelectric power, petroleum, natural gas and coal are self-sustainable. Commercial energy consumption in 1995 is estimated at 10,070,000 t of oil equivalent, broken down into 23% coal, 42% oil, 5% natural gas and 30% electricity. Abundant water resources will form the mainstay of future electric power supply. Commercial oil production started in 1986, making Vietnam an oil-exporting country. Several promising natural gas fields were discovered as a result of exploration by foreign capital. Coal deposits are estimated at nearly 3.5 billion tons, most of it anthracite. Electric power demand is growing at a higher rate than the economic growth of Vietnam; the growth rate of electric power demand is set at 1.3 times that of GDP. Since construction funds for new plants cannot be covered by the national budget and domestic investment alone, the country is counting on foreign capital. 21 figs., 36 tabs. 16. Study Modules for Calculus-Based General Physics. [Includes Modules 6 and 7: Work and Energy; Applications of Newton's Laws]. Science.gov (United States) Fuller, Robert G., Ed.; And Others This is part of a series of 42 Calculus Based Physics (CBP) modules totaling about 1,000 pages. The modules include study guides, practice tests, and mastery tests for a full-year individualized course in calculus-based physics based on the Personalized System of Instruction (PSI). The units are not intended to be used without outside materials;… 17.
78 FR 20910 - Hess Energy Marketing, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes... Science.gov (United States) 2013-04-08 ... Marketing, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket... Marketing, LLC's application for market-based rate authority, with an accompanying rate schedule, noting... interventions in lieu of paper, using the FERC Online links at http://www.ferc.gov. To facilitate electronic... 18. Hawaii Energy Resource Overviews. Volume 4. Impact of geothermal resource development in Hawaii (including air and water quality) Energy Technology Data Exchange (ETDEWEB) Siegel, S.M.; Siegel, B.Z. 1980-06-01 The environmental consequences of natural processes in a volcanic-fumarolic region and of geothermal resource development are presented. These include acute ecological effects, toxic gas emissions during non-eruptive periods, the HGP-A geothermal well as a site-specific model, and the geothermal resources potential of Hawaii. (MHR) 19. Time resolved techniques: An overview International Nuclear Information System (INIS) Larson, B.C.; Tischler, J.Z. 1990-06-01 Synchrotron sources provide exceptional opportunities for carrying out time-resolved x-ray diffraction investigations. The high intensity, high angular resolution, and continuously tunable energy spectrum of synchrotron x-ray beams lend themselves directly to carrying out sophisticated time-resolved x-ray scattering measurements on a wide range of materials and phenomena. When these attributes are coupled with the pulsed time-structure of synchrotron sources, entirely new time-resolved scattering possibilities are opened. Synchrotron beams typically consist of sub-nanosecond pulses of x-rays separated in time by a few tens of nanoseconds to a few hundred nanoseconds so that these beams appear as continuous x-ray sources for investigations of phenomena on time scales ranging from hours down to microseconds.
Studies requiring time-resolution ranging from microseconds to fractions of a nanosecond can be carried out in a triggering mode by stimulating the phenomena under investigation in coincidence with the x-ray pulses. Time resolution on the picosecond scale can, in principle, be achieved through the use of streak camera techniques in which the time structure of the individual x-ray pulses is viewed as a quasi-continuous source of ∼100-200 picoseconds duration. Techniques for carrying out time-resolved scattering measurements on time scales varying from picoseconds to kiloseconds at present and proposed synchrotron sources are discussed and examples of time-resolved studies are cited. 17 refs., 8 figs. 20. Polarization phenomena in nucleon-nucleon scattering at intermediate and high energies including the present status of dibaryons Energy Technology Data Exchange (ETDEWEB) Yokosawa, A. 1985-01-01 We review experimental results concerning polarization phenomena in nucleon-nucleon scattering, in which both elastic scattering and hadron-production reactions are included. We also present a summary of S = 0 dibaryon resonances and candidates by reviewing experimental data in the nucleon-nucleon system, the γd channel, πd elastic scattering, the pp → πd channel, deuteron break-up reactions, and narrow structures in missing-mass spectra. 93 refs., 26 figs. 1. Enforcement actions: Significant actions resolved International Nuclear Information System (INIS) 1989-06-01 This compilation summarizes significant enforcement actions that have been resolved during one quarterly period (January--March 1989) and includes copies of letters, Notices, and Orders sent by the Nuclear Regulatory Commission to licensees with respect to these enforcement actions. Also included are a number of enforcement actions that had been previously resolved but not published in this NUREG.
It is anticipated that the information in this publication will be widely disseminated to managers and employees engaged in activities licensed by the NRC, so that actions can be taken to improve safety by avoiding future violations similar to those described in this publication 2. Enforcement actions: Significant actions resolved International Nuclear Information System (INIS) 1990-05-01 This compilation summarizes significant enforcement actions that have been resolved during one quarterly period (January--March 1990) and includes copies of letters, Notices, and Orders sent by the Nuclear Regulatory Commission to licensees with respect to these enforcement actions. Also included are a number of enforcement actions that had been previously resolved but not published in this NUREG. It is anticipated that the information in this publication will be widely disseminated to managers and employees engaged in activities licensed by the NRC, so that actions can be taken to improve safety by avoiding future violations similar to those described in this publication 3. Bioenergy production from perennial energy crops: A consequential LCA of 12 bioenergy scenarios including land use changes DEFF Research Database (Denmark) Tonini, Davide; Hamelin, Lorie; Wenzel, Henrik 2012-01-01 In the endeavor of optimizing the sustainability of bioenergy production in Denmark, this consequential life cycle assessment (LCA) evaluated the environmental impacts associated with the production of heat and electricity from one hectare of Danish arable land cultivated with three perennial crops...... and IV) co-firing in large-scale coal-fired CHP plants. Soil carbon changes, direct and indirect land use changes as well as uncertainty analysis (sensitivity, Monte Carlo) were included in the LCA. Results showed that global warming was the bottleneck impact, where only two scenarios, namely willow... 4. Time resolved spectroscopic studies on some nanophosphors Wintec. 1. Introduction.
Time resolved spectroscopy is an important tool for studying energy and charge transfer processes, coupling of electronic and vibrational degrees of freedom, vibrational and conformational relaxation, isomerization, etc. 5. Technical support document: Energy efficiency standards for consumer products: Refrigerators, refrigerator-freezers, and freezers including draft environmental assessment, regulatory impact analysis Energy Technology Data Exchange (ETDEWEB) NONE 1995-07-01 The Energy Policy and Conservation Act (P.L. 94-163), as amended by the National Appliance Energy Conservation Act of 1987 (P.L. 100-12), the National Appliance Energy Conservation Amendments of 1988 (P.L. 100-357), and the Energy Policy Act of 1992 (P.L. 102-486), provides energy conservation standards for 12 of the 13 types of consumer products covered by the Act, and authorizes the Secretary of Energy to prescribe amended or new energy standards for each type (or class) of covered product. The assessment of the proposed standards for refrigerators, refrigerator-freezers, and freezers presented in this document is designed to evaluate their economic impacts according to the criteria in the Act. It includes an engineering analysis of the cost and performance of design options to improve the efficiency of the products; forecasts of the number and average efficiency of products sold, the amount of energy the products will consume, and their prices and operating expenses; a determination of change in investment, revenues, and costs to manufacturers of the products; a calculation of the costs and benefits to consumers, electric utilities, and the nation as a whole; and an assessment of the environmental impacts of the proposed standards. 6. Time-resolved studies International Nuclear Information System (INIS) Mills, D.M.
1992-01-01 When new or more powerful probes become available that offer both shorter data-collection times and the opportunity to apply innovative approaches to established techniques, it is natural that investigators consider the feasibility of exploring the kinetics of time-evolving systems. This stimulating area of research not only can lead to insights into the metastable or excited states that a system may populate on its way to a ground state, but can also lead to a better understanding of that final state. Synchrotron radiation, with its unique properties, offers just such a tool to extend X-ray measurements from the static to the time-resolved regime. The most straightforward application of synchrotron radiation to the study of transient phenomena is directly through the possibility of decreased data-collection times via the enormous increase in flux over that of a laboratory X-ray system. Even further increases in intensity can be obtained through the use of novel X-ray optical devices. Wide-bandpass monochromators, for example, which utilize the continuous spectral distribution of synchrotron radiation, can increase the flux on the sample by several orders of magnitude over conventional X-ray optical systems, thereby allowing a further shortening of the data-collection time. Another approach that uses the continuous spectral nature of synchrotron radiation to decrease data-collection times is the 'parallel data collection' method. Using this technique, intensities as a function of X-ray energy are recorded simultaneously for all energies rather than sequentially at each energy, allowing for a dramatic decrease in the data-collection time. 7. Does low-energy sweetener consumption affect energy intake and body weight? A systematic review, including meta-analyses, of the evidence from human and animal studies.
Science.gov (United States) Rogers, P J; Hogenkamp, P S; de Graaf, C; Higgs, S; Lluch, A; Ness, A R; Penfold, C; Perry, R; Putz, P; Yeomans, M R; Mela, D J 2016-03-01 By reducing energy density, low-energy sweeteners (LES) might be expected to reduce energy intake (EI) and body weight (BW). To assess the totality of the evidence testing the null hypothesis that LES exposure (versus sugars or unsweetened alternatives) has no effect on EI or BW, we conducted a systematic review of relevant studies in animals and humans consuming LES with ad libitum access to food energy. In 62 of 90 animal studies, exposure to LES either had no effect on BW or decreased it. Of the 28 reporting increased BW, 19 compared LES with glucose exposure using a specific 'learning' paradigm. Twelve prospective cohort studies in humans reported inconsistent associations between LES use and body mass index (-0.002 kg m⁻² per year, 95% confidence interval (CI) -0.009 to 0.005). Meta-analysis of short-term randomized controlled trials (129 comparisons) showed reduced total EI for LES versus sugar-sweetened food or beverage consumption before an ad libitum meal (-94 kcal, 95% CI -122 to -66), with no difference versus water (-2 kcal, 95% CI -30 to 26). This was consistent with EI results from sustained intervention randomized controlled trials (10 comparisons). Meta-analysis of sustained intervention randomized controlled trials (4 weeks to 40 months) showed that consumption of LES versus sugar led to relatively reduced BW (nine comparisons; -1.35 kg, 95% CI -2.28 to -0.42), and a similar relative reduction in BW versus water (three comparisons; -1.24 kg, 95% CI -2.22 to -0.26). Most animal studies did not mimic LES consumption by humans, and reverse causation may influence the results of prospective cohort studies.
The preponderance of evidence from all human randomized controlled trials indicates that LES do not increase EI or BW, whether compared with caloric or non-caloric (for example, water) control conditions. Overall, the balance of evidence indicates that use of LES in place of sugar, in children and adults, leads to reduced EI and BW, and possibly also 8. Technical support document: Energy conservation standards for consumer products: Dishwashers, clothes washers, and clothes dryers including: Environmental impacts; regulatory impact analysis Energy Technology Data Exchange (ETDEWEB) 1990-12-01 The Energy Policy and Conservation Act as amended (P.L. 94-163), establishes energy conservation standards for 12 of the 13 types of consumer products specifically covered by the Act. The legislation requires the Department of Energy (DOE) to consider new or amended standards for these and other types of products at specified times. This Technical Support Document presents the methodology, data and results from the analysis of the energy and economic impacts of standards on dishwashers, clothes washers, and clothes dryers. The economic impact analysis is performed in five major areas: An Engineering Analysis, which establishes technical feasibility and product attributes including costs of design options to improve appliance efficiency. A Consumer Analysis at two levels: national aggregate impacts, and impacts on individuals. The national aggregate impacts include forecasts of appliance sales, efficiencies, energy use, and consumer expenditures. The individual impacts are analyzed by Life-Cycle Cost (LCC), Payback Periods, and Cost of Conserved Energy (CCE), which evaluate the savings in operating expenses relative to increases in purchase price; A Manufacturer Analysis, which provides an estimate of manufacturers' response to the proposed standards. Their response is quantified by changes in several measures of financial performance for a firm. 
An Industry Impact Analysis shows financial and competitive impacts on the appliance industry. A Utility Analysis measures the impacts of the altered energy-consumption patterns on electric utilities. An Environmental Effects Analysis estimates changes in emissions of carbon dioxide, sulfur oxides, and nitrogen oxides due to reduced energy consumption in the home and at the power plant. A Regulatory Impact Analysis collects the results of all the analyses into the net benefits and costs from a national perspective. 47 figs., 171 tabs. (JF) 9. Research report for fiscal 1998. Study of utilization of biomass including foods in energy industry; 1998 nendo shokubutsu nado no biomass no energy riyo ni kansuru chosa hokokusho Energy Technology Data Exchange (ETDEWEB) NONE 1999-03-01 Rice, produced as a food, is singled out from among various types of biomass, and a feasibility study of its use in the energy industry is conducted from the viewpoints of technology and economy. The production of ethanol from rice, though there is little practical experience with it, is similar to the production of ethanol from other biomass resources in terms of technology and economy. The problem is that the production cost of rice is far higher than that of other materials. It is expected, however, that there will be a large-scale reduction in production cost and an increase in yield when novel cultivation techniques are introduced in the future. It is also expected that alcohol from rice will be sufficiently competitive with alcohol from molasses or the like once the exploitation of cellulose-family by-products such as husks becomes feasible. This study deals solely with the effective use of farmland and surplus rice. Competition between rice as a biomass resource and rice as a food has to be avoided as much as possible in the long term, because it could cause a price rise and compromise the security of the food supply.
That is, in discussing this matter, it is mandatory to draw a very definite line between rice as a food and rice as an alcohol production material. (NEDO) 10. Methods to include the influence of thermal bonds on the calculation of the energy performance of buildings and their influence on the heat demand for building heating Science.gov (United States) Valachova, D.; Zdrazilova, N.; Chudikova, B. 2018-02-01 The paper deals with the effect of thermal bonds (thermal bridges) on heat transmission through a building envelope. It then deals with ways to include thermal bonds in the calculation of heat loss through the building envelope and in the calculation of the energy efficiency of buildings. Treating thermal bonds correctly is very important, because they fundamentally influence the energy efficiency of buildings. It is important to realize that the building envelope comprises not only the peripheral surface structures but also the thermal bonds in areas where building structures join. 11. Future perspectives for climate action. How economics can prescribe more than an energy charge. An essay on how economics can contribute to resolving the climate problem Energy Technology Data Exchange (ETDEWEB) De Bruyn, S. 2013-07-15 How can economics contribute to designing a 'solution' for the emerging climate crisis? This essay attempts to answer that question by investigating the roots of economic thinking and analyzing the coordination issues that are at the heart of the climate problem. While economics has been a protagonist in climate change debates by providing economic instruments such as tradeable emission permits, it has also been an antagonist by calling into doubt the need for mitigation, the benefits of which were held not to outweigh the costs. This essay argues that climate change is primarily a social equity issue and that economics is a poor science for analyzing such issues.
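The thermal-bond (thermal bridge) contribution described in the building-envelope entry above is commonly folded into the transmission heat transfer coefficient as a sum of plane-element terms U·A plus linear-bridge terms ψ·l. A minimal sketch of that bookkeeping follows; all component values are invented for illustration and are not data from the study:

```python
def transmission_coefficient(elements, bridges):
    """Envelope transmission coefficient H_T in W/K.

    elements: (U [W/m2K], A [m2]) pairs for plane envelope parts;
    bridges:  (psi [W/mK], length [m]) pairs for linear thermal bridges.
    """
    h_plane = sum(u * a for u, a in elements)      # walls, windows, roof, ...
    h_bridge = sum(psi * l for psi, l in bridges)  # junctions where structures join
    return h_plane + h_bridge

# Illustrative envelope: wall, windows, roof; wall-floor and wall-roof junctions.
elements = [(0.18, 120.0), (1.10, 24.0), (0.15, 80.0)]
bridges = [(0.08, 40.0), (0.05, 28.0)]
h_t = transmission_coefficient(elements, bridges)  # W/K
heat_loss = h_t * 32.0                             # W at a 32 K indoor-outdoor difference
```

Ignoring the `bridges` term here is exactly the omission the paper warns against: the ψ·l contribution adds directly to the heat demand for building heating.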
Discussion models in economics and climate change science are fundamentally different, moreover, which means the two disciplines are prone to mutual misunderstanding. Nonetheless, to resolve the climate problem, climate science could well benefit from economic thinking, and especially from theoretical ideas from institutional economics concerning the design of effective policy instruments. 12. Comparison of approaches to Total Quality Management. Including an examination of the Department of Energy's position on quality management Energy Technology Data Exchange (ETDEWEB) Bennett, C.T. 1994-03-01 This paper presents a comparison of several qualitatively different approaches to Total Quality Management (TQM). The continuum ranges from management approaches that are primarily standards -- with specific guidelines, but few theoretical concepts -- to approaches that are primarily philosophical, with few specific guidelines. The approaches to TQM discussed in this paper include the International Organization for Standardization (ISO) 9000 Standard, the Malcolm Baldrige National Quality Award, Senge's concept of the learning organization, Watkins and Marsick's approach to organizational learning, Covey's Seven Habits of Highly Effective People, and Deming's Fourteen Points for Management. Some of these approaches (Deming and ISO 9000) are then compared to the DOE's official position on quality management and conduct of operations (DOE Orders 5700.6C and 5480.19). Using a tabular format, it is shown that while 5700.6C (Quality Assurance) maps well to many of the current approaches to TQM, the DOE's principal guide to management, Order 5480.19 (Conduct of Operations), has many significant conflicts with some of the modern approaches to continuous quality improvement. 13.
Enforcement actions: Significant actions resolved International Nuclear Information System (INIS) 1994-03-01 This compilation summarizes significant enforcement actions that have been resolved during one quarterly period (October - December 1993) and includes copies of letters, Notices, and Orders sent by the Nuclear Regulatory Commission to licensees with respect to these enforcement actions. It is anticipated that the information in this publication will be widely disseminated to managers and employees engaged in activities licensed by the NRC, so that actions can be taken to improve safety by avoiding future violations similar to those described in this publication 14. Enforcement actions: Significant actions resolved International Nuclear Information System (INIS) 1992-11-01 This compilation summarizes significant enforcement actions that have been resolved during one quarterly period (July - September 1992) and includes copies of letters, Notices, and Orders sent by the Nuclear Regulatory Commission to licensees with respect to these enforcement actions. It is anticipated that the information in this publication will be widely disseminated to managers and employees engaged in activities licensed by the NRC, so that actions can be taken to improve safety by avoiding future violations similar to those described in this publication 15. High Temperature Superconductors: From Delivery to Applications (Presentation from 2011 Ernest Orlando Lawrence Award-winner, Dr. Amit Goyal, and including introduction by Energy Secretary, Dr. Steven Chu) International Nuclear Information System (INIS) Goyal, Amit 2012-01-01 Dr. Amit Goyal, a high temperature superconductivity (HTS) researcher at Oak Ridge National Laboratory, was named a 2011 winner of the Department of Energy's Ernest Orlando Lawrence Award honoring U.S. scientists and engineers for exceptional contributions in research and development supporting DOE and its mission. 
Winner of the award in the inaugural category of Energy Science and Innovation, Dr. Goyal was cited for his work in 'pioneering research and transformative contributions to the field of applied high temperature superconductivity, including fundamental materials science advances and technical innovations enabling large-scale applications of these novel materials.' Following his basic research in grain-to-grain supercurrent transport, Dr. Goyal focused his energy on transitioning this fundamental understanding into cutting-edge technologies. Under OE sponsorship, Dr. Goyal co-invented the Rolling Assisted Bi-Axially Textured Substrate technology (RABiTS) that is used as a substrate for second generation HTS wires. OE support also led to the invention of the Structural Single Crystal Faceted Fiber Substrate (SSIFFS) and the 3-D Self Assembly of Nanodot Columns. These inventions and associated R&D resulted in 7 R&D 100 Awards including the 2010 R&D Magazine's Innovator of the Year Award, 3 Federal Laboratory Consortium Excellence in Technology Transfer National Awards, a DOE Energy100 Award and many others. As a world authority on HTS materials, Dr. Goyal has presented OE-sponsored results in more than 150 invited talks, co-authored more than 350 papers and is a fellow of 7 professional societies. 16. Time resolved studies of dual emission and photoinduced energy transfer in a Tris methoxy coumarin derivative of a cryptand and its complex with Tb(NO3)3 International Nuclear Information System (INIS) Samanta, Subhodip; Roy, Maitrayee Basu; Ghosh, Sanjib 2006-01-01 The paper reports time resolved emission studies in different solvents of the dual emission observed in the macrotricyclic cryptand (L), in which the three secondary amino nitrogens have been derivatized with methoxy coumarin, at room temperature and at 77K.
The emission from the 'locally excited monomer state' has a lifetime of less than 1 ns, while the other emitting state is an exciplex state with a lifetime of 4-5 ns, depending on the solvent. The lifetime is found to increase significantly in the presence of protons and at 77K, exhibiting photoinduced electron transfer (PET) in the system L. The system exhibits photoinduced energy transfer (ET) in its Tb(III) complex, with NO₃⁻ as the counteranion, at room temperature as well as at 77K. The rate constants for energy transfer from the coumarin moiety to Tb(III) have been evaluated at room temperature and at 77K by following the decay of the ⁵D₄ → ⁷F₅ emission of Tb(III). The results indicate that energy transfer takes place from the lowest triplet state of the coumarin moiety to Tb(III) by an exchange mechanism. The energy transfer (ET) rate constants at room temperature and at 77K have been evaluated and interpreted using the geometry of L obtained by theoretical calculation. 17. Energy partitioning in polyatomic chemical reactions: Quantum state resolved studies of highly exothermic atom abstraction reactions from molecules in the gas phase and at the gas-liquid interface Science.gov (United States) Zolot, Alexander M. This thesis recounts a series of experiments that interrogate the dynamics of elementary chemical reactions using quantum state resolved measurements of gas-phase products. The gas-phase reactions F + HCl → HF + Cl and F + H2O → HF + OH are studied using crossed supersonic jets under single collision conditions. Infrared (IR) laser absorption probes the HF product with near shot-noise limited sensitivity and high resolution, capable of resolving rovibrational states and Doppler lineshapes. Both reactions yield inverted vibrational populations. For the HCl reaction, strongly bimodal rotational distributions are observed, suggesting microscopic branching of the reaction mechanism.
Alternatively, such structure may result from a quantum-resonance mediated reaction similar to those found in the well-characterized F + HD system. For the H2O reaction, a small, but significant, branching into v = 2 is particularly remarkable because this manifold is accessible only via the additional center of mass collision energy in the crossed jets. Rotationally hyperthermal HF is also observed. Ab initio calculations of the transition state geometry suggest mechanisms for both rotational and vibrational excitation. Exothermic chemical reaction dynamics at the gas-liquid interface have been investigated by colliding a supersonic jet of F atoms with liquid squalane (C30H62), a low vapor pressure hydrocarbon compatible with the high vacuum environment. IR spectroscopy provides absolute HF(v,J) product densities and Doppler resolved velocity component distributions perpendicular to the surface normal. Compared to analogous gas-phase F + hydrocarbon reactions, the liquid surface is a more effective "heat sink," yet vibrationally excited populations reveal incomplete thermal accommodation with the surface. Non-Boltzmann J-state populations and hot Doppler lineshapes that broaden with HF excitation indicate two competing scattering mechanisms: (i) a direct reactive scattering channel 18. Energy savings for heat-island reduction strategies in Chicago and Houston (including updates for Baton Rouge, Sacramento, and Salt Lake City) Energy Technology Data Exchange (ETDEWEB) Konopacki, S.; Akbari, H. 2002-02-28 In 1997, the U.S. Environmental Protection Agency (EPA) established the "Heat Island Reduction Initiative" to quantify the potential benefits of Heat-Island Reduction (HIR) strategies (i.e., shade trees, reflective roofs, reflective pavements and urban vegetation) to reduce cooling-energy use in buildings, lower the ambient air temperature and improve urban air quality in cities, and reduce CO2 emissions from power plants.
Under this initiative, the Urban Heat Island Pilot Project (UHIPP) was created with the objective of investigating the potential of HIR strategies in residential and commercial buildings in three initial UHIPP cities: Baton Rouge, LA; Sacramento, CA; and Salt Lake City, UT. Later two other cities, Chicago, IL and Houston, TX were added to the UHIPP. In an earlier report we summarized our efforts to calculate the annual energy savings, peak power avoidance, and annual CO2 reduction obtainable from the introduction of HIR strategies in the initial three cities. This report summarizes the results of our study for Chicago and Houston. In this analysis, we focused on three building types that offer the highest potential savings: single-family residence, office and retail store. Each building type was characterized in detail by vintage and system type (i.e., old and new building constructions, and gas and electric heat). We used the prototypical building characteristics developed earlier for each building type and simulated the impact of HIR strategies on building cooling- and heating-energy use and peak power demand using the DOE-2.1E model. Our simulations included the impact of (1) strategically-placed shade trees near buildings [direct effect], (2) use of high-albedo roofing material on the building [direct effect], (3) urban reforestation with high-albedo pavements and building surfaces [indirect effect] and (4) combined strategies 1, 2, and 3 [direct and indirect effects]. We then estimated the total roof area of air 19. Energy in greenhouses in the Netherlands. Developments in the sector and in the businesses up to and including 1994; Energie in de glastuinbouw van Nederland. Ontwikkelingen in de sector en op bedrijven t/m 1994 Energy Technology Data Exchange (ETDEWEB) Van der Velden, N.J.A.; Van der Sluis, B.J.; Verhaegh, A.P. 
1996-02-01 An overview is given of energy-efficiency developments, CO2 emission, and the degrees of penetration and application of energy saving options in the glasshouse market gardening sector. The aims of the long-range agreement between the greenhouse businesses and the Dutch government (i.e. 50% energy efficiency must be realized within the period 1980-2000) are taken into account. Up to and including 1994 the energy efficiency had improved by 38%. The CO2 emission improved from 113% to 108% compared to the emission level in 1989/1990. Further improvements and reductions can be realized by better and wider use of energy saving options. There is a positive development in the application of condensers, climate computers, heat buffers, pure CO2, shields, and cogeneration installations. The use of waste heat is the most important option: in 1994 the contribution of waste heat to the total energy consumption in the glasshouse sector increased by 6%. 5 figs., 5 ills., 32 tabs., 3 appendices, 24 refs. 20. Hierarchical Control Strategy of Heat and Power for Zero Energy Buildings including Hybrid Fuel Cell/Photovoltaic Power Sources and Plug-in Electric Vehicle DEFF Research Database (Denmark) 2016-01-01 This paper presents a hierarchical control strategy for heat and electric power control of a building integrating hybrid renewable power sources including photovoltaic, fuel cell and battery energy storage with Plug-in Electric Vehicles (PEV) in smart distribution systems. Because...... complexities and uncertainties in this kind of hybrid system, a hybrid supervisory control with an adaptive fuzzy sliding power control strategy is proposed to regulate the amount of requested fuel from a fuel cell power source to produce the electrical power and heat. Then, simulation results are used...... of the controllability of fuel cell power, this power source plays the main role in providing heat and electric power to zero emission buildings.
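The energy-efficiency figure tracked in the Dutch glasshouse long-range agreement above is an index of energy use per unit of product relative to the 1980 baseline (1980 = 100), so a "38% improvement" means the index has fallen to 62. A minimal sketch of that arithmetic; the consumption figures are invented so that the index reproduces the reported 38% improvement, and are not the report's data:

```python
def efficiency_index(energy_per_unit, baseline_energy_per_unit):
    """Energy use per unit of product, indexed to the 1980 baseline (= 100)."""
    return 100.0 * energy_per_unit / baseline_energy_per_unit

# Hypothetical figures (e.g. MJ per unit of product) chosen for illustration.
index_1994 = efficiency_index(31.0, 50.0)  # index value in 1994
improvement = 100.0 - index_1994           # percent improvement vs 1980
```

Against the agreement's target, the index would have to reach 50 (a 50% improvement) by 2000.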
First, the power flow structure between the hybrid power resources is described. To do so, all necessary electrical and thermal equations are investigated. Next, due to the many... 1. Enrico Fermi Awards Ceremony for Dr. Allen J. Bard and Dr. Andrew Sessler, February 2014 (Presentations, including remarks by Energy Secretary, Dr. Ernest Moniz) Energy Technology Data Exchange (ETDEWEB) Moniz, Ernest [U.S. Energy Secretary] 2014-02-03 The Fermi Award is a Presidential award and is one of the oldest and most prestigious science and technology honors bestowed by the U.S. Government. On February 3, 2014 it was conferred upon two exceptional scientists. The first to be recognized is Dr. Allen J. Bard, 'for international leadership in electrochemical science and technology, for advances in photoelectrochemistry and photocatalytic materials, processes, and devices, and for discovery and development of electrochemical methods including electrogenerated chemiluminescence and scanning electrochemical microscopy.' The other honoree is Dr. Andrew Sessler, 'for advancing accelerators as powerful tools of scientific discovery, for visionary direction of the research enterprise focused on challenges in energy and the environment, and for championing outreach and freedom of scientific inquiry worldwide.' Dr. Patricia Dehmer opened the ceremony, and Dr. Ernest Moniz presented the awards. 2. The Surface Energy Budget and Precipitation Efficiency for Convective Systems During TOGA COARE, GATE, SCSMEX and ARM: Cloud-Resolving Model Simulations Science.gov (United States) Tao, W.-K.; Shie, C.-L.; Johnson, D; Simpson, J.; Starr, David O'C. (Technical Monitor) 2002-01-01 A two-dimensional version of the Goddard Cumulus Ensemble (GCE) Model is used to simulate convective systems that developed in various geographic locations.
Observed large-scale advective tendencies for potential temperature, water vapor mixing ratio, and horizontal momentum derived from field campaigns are used as the main forcing. By examining the surface energy budgets, the model results show that the two largest terms are net condensation (heating/drying) and imposed large-scale forcing (cooling/moistening) for tropical oceanic cases. These two terms are opposite in sign. The contributions by net radiation and latent heat flux to the net condensation vary in these tropical cases, however. For cloud systems that developed over the South China Sea and eastern Atlantic, net radiation (cooling) accounts for about 20% or more of the net condensation. However, short-wave heating and long-wave cooling are in balance with each other for cloud systems over the West Pacific region such that the net radiation is very small. This is due to the thick anvil clouds simulated in the cloud systems over the Pacific region. Large-scale cooling exceeds large-scale moistening in the Pacific and Atlantic cases. For cloud systems over the South China Sea, however, there is more large-scale moistening than cooling even though the cloud systems developed in a very moist environment. For three cloud systems that developed over a mid-latitude continent, the net radiation and sensible and latent heat fluxes play a much more important role. This means the accurate measurement of surface fluxes and radiation is crucial for simulating these mid-latitude cases. 3. Resolved resonance parameters for 236Np International Nuclear Information System (INIS) Morogovskij, G.B.; Bakhanovich, L.A. 2002-01-01 Multilevel Breit-Wigner parameters were obtained for fission cross-section representation in the 0.01-33 eV energy region from evaluation of a 236Np experimental fission cross-section in the resolved resonance region. (author) 4. Direct angle resolved photoemission spectroscopy and ...
Since 1997 we systematically perform direct angle resolved photoemission spectroscopy (ARPES) on in-situ grown thin (< 30 nm) cuprate films. Specifically, we probe the low-energy electronic structure and properties of high-Tc superconductors (HTSC) under different degrees of epitaxial (compressive vs. tensile) strain. 5. Enforcement actions: Significant actions resolved International Nuclear Information System (INIS) 1993-03-01 This compilation summarizes significant enforcement actions that have been resolved during one quarterly period (October--December 1992) and includes copies of letters, Notices, and Orders sent by the Nuclear Regulatory Commission to licensees with respect to these enforcement actions. It is anticipated that the information in this publication will be widely disseminated to managers and employees engaged in activities licensed by the NRC, so that actions can be taken to improve safety by avoiding future violations similar to those described in this publication 6. Enforcement actions: Significant actions resolved International Nuclear Information System (INIS) 1991-02-01 This compilation summarizes significant enforcement actions that have been resolved during one quarterly period (October--December 1990) and includes copies of letters, Notices, and Orders sent by the Nuclear Regulatory Commission to licensees with respect to these enforcement actions. It is anticipated that the information in this publication will be widely disseminated to managers and employees engaged in activities licensed by the NRC, so that actions can be taken to improve safety by avoiding future violations similar to those described in this publication 7.
Enforcement actions: Significant actions resolved International Nuclear Information System (INIS) 1992-08-01 This compilation summarizes significant enforcement actions that have been resolved during one quarterly period (April--June 1992) and includes copies of letters, Notices, and Orders sent by the Nuclear Regulatory Commission to licensees with respect to these enforcement actions. It is anticipated that the information in this publication will be widely disseminated to managers and employees engaged in activities licensed by the NRC, so that actions can be taken to improve safety by avoiding future violations similar to those described in this publication 8. Enforcement actions: Significant actions resolved International Nuclear Information System (INIS) 1992-03-01 This compilation summarizes significant enforcement actions that have been resolved during one quarterly period (October--December 1991) and includes copies of letters, Notices, and Orders sent by the Nuclear Regulatory Commission to licensees with respect to these enforcement actions. It is anticipated that the information in this publication will be widely disseminated to managers and employees engaged in activities licensed by the NRC, so that actions can be taken to improve safety by avoiding future violations similar to those described in this publication 9. Enforcement actions: Significant actions resolved International Nuclear Information System (INIS) 1991-07-01 This compilation summarizes significant enforcement actions that have been resolved during one quarterly period (April-June 1991) and includes copies of letters, Notices, and Orders sent by the Nuclear Regulatory Commission to licensees with respect to these enforcement actions. 
It is anticipated that the information in this publication will be widely disseminated to managers and employees engaged in activities licensed by the NRC, so that actions can be taken to improve safety by avoiding future violations similar to those described in this publication 10. Enforcement actions: Significant actions resolved International Nuclear Information System (INIS) 1993-12-01 This compilation summarizes significant enforcement actions that have been resolved during one quarterly period (July--September 1993) and includes copies of letters, Notices, and Orders sent by the Nuclear Regulatory Commission to licensees with respect to these enforcement actions. It is anticipated that the information in this publication will be widely disseminated to managers and employees engaged in activities licensed by the NRC, so that actions can be taken to improve safety by avoiding future violations similar to those described in this publication 11. Enforcement actions: Significant actions resolved International Nuclear Information System (INIS) 1993-06-01 This compilation summarizes significant enforcement actions that have been resolved during one quarterly period (January--March 1993) and includes copies of letters, Notices, and Orders sent by the Nuclear Regulatory Commission to licensees with respect to these enforcement actions. It is anticipated that the information in this publication will be widely disseminated to managers and employees engaged in activities licensed by the NRC, so that actions can be taken to improve safety by avoiding future violations similar to those described in this publication 12. 
Enforcement actions: Significant actions resolved International Nuclear Information System (INIS) 1992-05-01 This compilation summarizes significant enforcement actions that have been resolved during one quarterly period (January--March 1992) and includes copies of letters, Notices, and Orders sent by the Nuclear Regulatory Commission to licensees with respect to these enforcement actions. It is anticipated that the information in this publication will be widely disseminated to managers and employees engaged in activities licensed by the NRC, so that actions can be taken to improve safety by avoiding future violations similar to those described in this publication 13. Enforcement actions: Significant actions resolved International Nuclear Information System (INIS) 1990-11-01 This compilation summarizes significant enforcement actions that have been resolved during one quarterly period (July--September 1990) and includes copies of letters, notices, and orders sent by the Nuclear Regulatory Commission to licensees with respect to these enforcement actions. It is anticipated that the information in this publication will be widely disseminated to managers and employees engaged in activities licensed by the NRC, so that actions can be taken to improve safety by avoiding future violations similar to those described in this publication 14. Enforcement actions: Significant actions resolved International Nuclear Information System (INIS) 1991-11-01 This compilation summarizes significant enforcement actions that have been resolved during one quarterly period (July--September 1991) and includes copies of letters, Notices, and Orders sent by the Nuclear Regulatory Commission to licensees with respect to these enforcement actions. 
It is anticipated that the information in this publication will be widely disseminated to managers and employees engaged in activities licensed by the NRC, so that actions can be taken to improve safety by avoiding future violations similar to those described in this publication 15. Enforcement actions: Significant actions resolved International Nuclear Information System (INIS) 1990-03-01 This compilation summarizes significant enforcement actions that have been resolved during one quarterly period (October--December 1989) and includes copies of letters, Notices, and Orders sent by the Nuclear Regulatory Commission to licensees with respect to these enforcement actions. It is anticipated that the information in this publication will be widely disseminated to managers and employees engaged in activities licensed by the NRC, so that actions can be taken to improve safety by avoiding future violations similar to those described in this publication 16. Enforcement actions: Significant actions resolved International Nuclear Information System (INIS) 1989-12-01 This compilation summarizes significant enforcement actions that have been resolved during one quarterly period (July--September 1989) and includes copies of letters, Notices, and Orders sent by the Nuclear Regulatory Commission to licensees with respect to these enforcement actions. It is anticipated that the information in this publication will be widely disseminated to managers and employees engaged in activities licensed by the NRC, so that actions can be taken to improve safety by avoiding future violations similar to those described in this publication 17. 
Enforcement actions: Significant actions resolved International Nuclear Information System (INIS) 1991-05-01 This compilation summarizes significant enforcement actions that have been resolved during one quarterly period (January--March 1991) and includes copies of letters, Notices, and Orders sent by the Nuclear Regulatory Commission to licensees with respect to these enforcement actions. It is anticipated that the information in this publication will be widely disseminated to managers and employees engaged in activities licensed by the NRC, so that actions can be taken to improve safety by avoiding future violations similar to those described in this publication 18. Enforcement actions: Significant actions resolved International Nuclear Information System (INIS) 1993-09-01 This compilation summarizes significant enforcement actions that have been resolved during one quarterly period (April--June 1993) and includes copies of letters, Notices, and Orders sent by the Nuclear Regulatory Commission to licensees with respect to these enforcement actions. It is anticipated that the information in this publication will be widely disseminated to managers and employees engaged in activities licensed by the NRC, so that actions can be taken to improve safety by avoiding future violations similar to those described in this publication 19. A Preliminary Review of U.S. Forest Service Business Practices To Authorize Special Uses, Including Energy Infrastructure Projects, on National Forest System Lands Energy Technology Data Exchange (ETDEWEB) Wescott, K. L. [Argonne National Lab. (ANL), Argonne, IL (United States); May, J. E. [Argonne National Lab. (ANL), Argonne, IL (United States); Moore, H. R. [Argonne National Lab. (ANL), Argonne, IL (United States); Brunner, D. L. [Argonne National Lab. (ANL), Argonne, IL (United States) 2014-09-01 The U.S. Forest Service (USFS) Special Uses-Lands Program is in jeopardy. Although this program, authorized in Title 36, Part 251, of the U.S. 
Code of Federal Regulations (36 CFR Part 251), ranks among the top four revenue-generating programs for use of National Forest System (NFS) lands, along with the Timber, Minerals, and Special Uses-Recreation Programs, the Special Uses-Lands Program is in a state of neglect. Repeated cuts in funding (a decrease of 26% from fiscal years 2010 to 2014) are adversely affecting staffing and training, which in turn is affecting timely permit processing and ultimately the public’s ability to use and benefit from NFS lands. In addition, highly experienced staff with valuable institutional knowledge of the program have begun to retire. The ability of the program to function under these dire circumstances can be attributed to the dedication of Special Uses staff to the program and their commitment to the public. The initial focus of this report was to identify opportunities for improving performance of permitting and review for large energy infrastructure-related projects. However, it became clear during this analysis that these projects are generally adequately staffed and managed. This is due in large part to the availability of cost-recovery dollars and the high-profile nature of these projects. However, it also became apparent that larger issues affecting the bulk of the work of the Special Uses-Lands Program need to be addressed immediately. This report is a preliminary examination of the state of the Special Uses-Lands Program and focuses on a few key items requiring immediate attention. Further investigation through case studies is recommended to dig deeper into the Special Uses-Lands Program business process to determine the most cost-effective strategies for streamlining the overall process and the metrics by which performance can be evaluated, including for the permitting and tracking of energy infrastructure projects. 20.
Energy evolution of the moments of the hadron distribution in QCD jets including NNLL resummation and NLO running-coupling corrections CERN Document Server Perez-Ramos, Redamy 2014-01-01 The moments of the single inclusive momentum distribution of hadrons in QCD jets are studied in the next-to-modified-leading-log approximation (NMLLA) including next-to-leading-order (NLO) corrections to the alpha_s strong coupling. The evolution equations are solved using a distorted Gaussian parametrisation, which successfully reproduces the spectrum of charged hadrons of jets measured in e+e- collisions. The energy dependencies of the maximum peak, multiplicity, width, kurtosis and skewness of the jet hadron distribution are computed analytically. Comparisons of all the existing jet data measured in e+e- collisions in the range sqrt(s)~2-200 GeV to the NMLLA+NLO* predictions allow one to extract a value of the QCD parameter Lambda_QCD, and associated two-loop coupling constant at the Z resonance alpha_s(m_Z^2) = 0.1195 +/- 0.0022, in excellent numerical agreement with the current world average obtained using other methods. 1. Coronagraphic Planet Finding with Energy Resolving Detectors Data.gov (United States) National Aeronautics and Space Administration — We propose to build a 10,000 pixel MKID camera and integrate it with the Project 1640 coronagraph and the PALM-3000 adaptive optics system at the Palomar 200-inch... 2. Time-resolved spectroscopy in synchrotron radiation International Nuclear Information System (INIS) Rehn, V.; Stanford Univ., CA 1980-01-01 Synchrotron radiation (SR) from large-diameter storage rings has intrinsic time structure which facilitates time-resolved measurements from milliseconds to picoseconds and possibly below. The scientific importance of time-resolved measurements is steadily increasing as more and better techniques are discovered and applied to a wider variety of scientific problems.
This paper presents a discussion of the importance of various parameters of the SR facility in providing for time-resolved spectroscopy experiments, including the role of beam-line optical design parameters. Special emphasis is placed on the requirements of extremely fast time-resolved experiments with which the effects of atomic vibrational or relaxation motion may be studied. Before discussing the state-of-the-art timing experiments, we review several types of time-resolved measurements which have now become routine: nanosecond-range fluorescence decay times, time-resolved emission and excitation spectroscopies, and various time-of-flight applications. These techniques all depend on a short SR pulse length and a long interpulse period, such as is provided by a large-diameter ring operating in a single-bunch mode. In most cases, the pulse shape and even the stability of the pulse shape are relatively unimportant as long as the pulse length is smaller than the risetime of the detection apparatus, typically 1 to 2 ns. For time resolution smaller than 1 ns, the requirements on the pulse shape become more stringent. (orig./FKS) 3. Highly resolving computerized tomography International Nuclear Information System (INIS) Kurtz, B.; Petersen, D.; Walter, E. 1984-01-01 With the development of highly-resolving devices for computerized tomography, CT diagnosis of the lumbar vertebral column has gained increasing importance. As an ambulatory, non-invasive method it has proved in comparative studies to be at least equivalent to myelography in the detection of dislocations of inter-vertebral disks (4,6,7,15). Because modern devices present not only the bones but especially the soft-tissue structures of the spine clearly and precisely, with a resolution well below 1 mm, a further improvement of the results is expected as experience increases.
The authors report on the diagnosis of the lumbar vertebral column with the aid of a modern device for computerized tomography and wish to draw particular attention to the possibility of performing this investigation routinely, and to the diagnostic value of secondary reconstructions. (BWU) [de] 4. An overview of wind energy, taking into consideration several important issues, including an analysis of regulatory requirements for the connection of wind generation into the power system OpenAIRE Gimenez Alvarez, Juan Manuel; SCHWEICKARDT, GUSTAVO; GÓMEZ TARGARONA, JUAN CARLOS 2012-01-01 Pollution problems such as the greenhouse effect as well as the high value and volatility of fuel prices have forced and accelerated the development and use of renewable energy sources. In this work a complete review of wind generation is presented. In the first part a brief history of the wind energy developments is detailed. Next, some comments on the present and future state are made. Then, a review of the modern structures of wind generation is carried out. In fourth place it is ... 5. Resolving the Circumgalactic Medium in the NEPHTHYS Simulations Science.gov (United States) Richardson, Mark Lawrence Albert; Devriendt, Julien; Slyz, Adrianne; Rosdahl, Karl Joakim; Kimm, Taysun 2018-01-01 NEPHTHYS is a RAMSES cosmological-zoom galaxy simulation suite investigating the impact of stellar feedback (winds, radiation, and type Ia and II SNe) on z > 1 ~L* galaxies and their environments. NEPHTHYS has ~10 pc resolution in the galaxy, where the scales driving star formation and the interaction of stellar feedback with the ISM can begin to be resolved. As outflows, winds, and radiation permeate through the circumgalactic medium (CGM) they can heat or cool gas, and deposit metals throughout the CGM. Such material in the CGM is seen by spectroscopic studies of distant quasars, where CGM gas of foreground galaxies is observed in absorption.
The origin and evolution of this gas are still unclear. To help answer this, NEPHTHYS includes additional refinement in the CGM, refining it to an unrivaled 80 pc resolution. I will discuss how this extra resolution is crucial for resolving the complex structure of outflows and accretion in the CGM. Specifically, the metal mass and covering fraction of metals and high energy ions are increased, while the better resolved outflows lead to a decrease in the overall baryon content of galaxy halos, and individual outflow events can have larger velocities. Our results suggest that absorption observations of the CGM are tracing a clumpy column of gas with multiple kinematic components. 6. Enrico Fermi Awards Ceremony for Dr. Mildred S. Dresselhaus and Dr. Burton Richter, May 2012 (Presentations, including remarks by Energy Secretary, Dr. Steven Chu) International Nuclear Information System (INIS) Chu, Steven 2012-01-01 The Fermi Award is a Presidential award and is one of the oldest and most prestigious science and technology honors bestowed by the U.S. Government. On May 7, 2012 it was conferred upon two exceptional scientists: Dr. Mildred Dresselhaus, 'for her scientific leadership, her major contributions to science and energy policy, her selfless work in science education and the advancement of diversity in the scientific workplace, and her highly original and impactful research,' and Dr. Burton Richter, 'for the breadth of his influence in the multiple disciplines of accelerator physics and particle physics, his profound scientific discoveries, his visionary leadership as SLAC Director, his leadership of science, and his notable contributions in energy and public policy.' Dr. John Holdren, Director of the White House Office of Science and Technology Policy, opened the ceremony, and Dr. Bill Brinkman, Director of DOE's Office of Science, introduced the main speaker, Dr. Steven Chu, U.S. Energy Secretary. 7.
Including the temporal change in PM{sub 2.5} concentration in the assessment of human health impact: Illustration with renewable energy scenarios to 2050 Energy Technology Data Exchange (ETDEWEB) Gschwind, Benoit, E-mail: benoit.gschwind@mines-paristech.fr [Centre Observation, Impacts, Energy, MINES ParisTech, 1 rue Claude Daunesse, CS 10207, F-06904 Sophia Antipolis (France); Lefevre, Mireille, E-mail: mireille.lefevre@mines-paristech.fr [Centre Observation, Impacts, Energy, MINES ParisTech, 1 rue Claude Daunesse, CS 10207, F-06904 Sophia Antipolis (France); Blanc, Isabelle, E-mail: isabelle.blanc@mines-paristech.fr [Centre Observation, Impacts, Energy, MINES ParisTech, 1 rue Claude Daunesse, CS 10207, F-06904 Sophia Antipolis (France); Ranchin, Thierry, E-mail: thierry.ranchin@mines-paristech.fr [Centre Observation, Impacts, Energy, MINES ParisTech, 1 rue Claude Daunesse, CS 10207, F-06904 Sophia Antipolis (France); Wyrwa, Artur, E-mail: awyrwa@agh.edu.pl [AGH University of Science and Technology, Al. Mickiewicza 30, Krakow 30-059 (Poland); Drebszok, Kamila [AGH University of Science and Technology, Al. Mickiewicza 30, Krakow 30-059 (Poland); Cofala, Janusz, E-mail: cofala@iiasa.ac.at [International Institute for Applied Systems Analysis, Schlossplatz 1, 2067 Laxenburg (Austria); Fuss, Sabine, E-mail: fuss@mcc-berlin.net [International Institute for Applied Systems Analysis, Schlossplatz 1, 2067 Laxenburg (Austria); Mercator Research Institute on Global Commons and Climate Change, Torgauer Str. 12-15, 10829 Berlin (Germany) 2015-04-15 This article proposes a new method to assess the health impact of populations exposed to fine particles (PM{sub 2.5}) during their whole lifetime, which is suitable for comparative analysis of energy scenarios. The method takes into account the variation of particle concentrations over time as well as the evolution of population cohorts. 
Its capabilities are demonstrated for two pathways of European energy system development up to 2050: the Baseline (BL) and the Low Carbon, Maximum Renewable Power (LC-MRP). These pathways were combined with three sets of assumptions about emission control measures: Current Legislation (CLE), Fixed Emission Factors (FEFs), and the Maximum Technically Feasible Reductions (MTFRs). Analysis was carried out for 45 European countries. Average PM{sub 2.5} concentration over Europe in the LC-MRP/CLE scenario is reduced by 58% compared with the BL/FEF case. Health impacts (expressed in days of loss of life expectancy) decrease by 21%. For the LC-MRP/MTFR scenario the average PM{sub 2.5} concentration is reduced by 85% and the health impact by 34%. The methodology was developed within the framework of the EU's FP7 EnerGEO project and was implemented in the Platform of Integrated Assessment (PIA). The Platform enables performing health impact assessments for various energy scenarios. - Highlights: • A new method to assess health impact of PM{sub 2.5} for energy scenarios is proposed. • An algorithm to compute Loss of Life Expectancy attributable to exposure to PM{sub 2.5} is depicted. • Its capabilities are demonstrated for two pathways of European energy system development up to 2050. • Integrating the temporal evolution of PM{sub 2.5} is of great interest for assessing the potential impacts of energy scenarios. 8. Including the temporal change in PM2.5 concentration in the assessment of human health impact: Illustration with renewable energy scenarios to 2050 International Nuclear Information System (INIS) Gschwind, Benoit; Lefevre, Mireille; Blanc, Isabelle; Ranchin, Thierry; Wyrwa, Artur; Drebszok, Kamila; Cofala, Janusz; Fuss, Sabine 2015-01-01 This article proposes a new method to assess the health impact of populations exposed to fine particles (PM 2.5 ) during their whole lifetime, which is suitable for comparative analysis of energy scenarios. 
The method takes into account the variation of particle concentrations over time as well as the evolution of population cohorts. Its capabilities are demonstrated for two pathways of European energy system development up to 2050: the Baseline (BL) and the Low Carbon, Maximum Renewable Power (LC-MRP). These pathways were combined with three sets of assumptions about emission control measures: Current Legislation (CLE), Fixed Emission Factors (FEFs), and the Maximum Technically Feasible Reductions (MTFRs). Analysis was carried out for 45 European countries. Average PM 2.5 concentration over Europe in the LC-MRP/CLE scenario is reduced by 58% compared with the BL/FEF case. Health impacts (expressed in days of loss of life expectancy) decrease by 21%. For the LC-MRP/MTFR scenario the average PM 2.5 concentration is reduced by 85% and the health impact by 34%. The methodology was developed within the framework of the EU's FP7 EnerGEO project and was implemented in the Platform of Integrated Assessment (PIA). The Platform enables performing health impact assessments for various energy scenarios. - Highlights: • A new method to assess health impact of PM 2.5 for energy scenarios is proposed. • An algorithm to compute Loss of Life Expectancy attributable to exposure to PM 2.5 is depicted. • Its capabilities are demonstrated for two pathways of European energy system development up to 2050. • Integrating the temporal evolution of PM 2.5 is of great interest for assessing the potential impacts of energy scenarios 9. Changes in body weight, blood pressure and selected metabolic biomarkers with an energy-restricted diet including twice daily sweet snacks and once daily sugar-free beverage OpenAIRE Nickols-Richardson, Sharon M.; Piehowski, Kathryn E.; Metzgar, Catherine J.; Miller, Debra L.; Preston, Amy G. 
2014-01-01 BACKGROUND/OBJECTIVES The type of sweet snack incorporated into an energy-restricted diet (ERD) may produce differential effects on metabolic improvements associated with body weight (BW) loss. This study compared effects of incorporating either twice daily energy-controlled dark chocolate snacks plus once daily sugar-free cocoa beverage (DC) to non-chocolate snacks plus sugar-free non-cocoa beverage (NC) into an ERD on BW loss and metabolic outcomes. MATERIALS/METHODS In an 18-week randomize... 10. Resolving inventory differences International Nuclear Information System (INIS) Weber, J.H.; Clark, J.P. 1991-01-01 Determining the cause of an inventory difference (ID) that exceeds warning or alarm limits should not only involve investigation into measurement methods and reexamination of the model assumptions used in the calculation of the limits, but also result in corrective actions that improve the quality of the accountability measurements. An example illustrating methods used by Savannah River Site (SRS) personnel to resolve an ID is presented that may be useful to other facilities faced with a similar problem. After first determining that no theft or diversion of material occurred and correcting any accountability calculation errors, investigation into the IDs focused on volume and analytical measurements, limit of error of inventory difference (LEID) modeling assumptions, and changes in the measurement procedures and methods prior to the alarm. There had been a gradual gain trend in IDs prior to the alarm which was reversed by the alarm inventory. The majority of the NM in the facility was stored in four large tanks which helped identify causes for the alarm. The investigation, while indicating no diversion or theft, resulted in changes in the analytical method and in improvements in the measurement and accountability that produced a 67% improvement in the LEID 11. 
A spectral pyrometer to spatially resolve the blackbody temperature of a warm dense plasma Science.gov (United States) Coleman, J. E. 2016-12-01 A pyrometer has been developed to spatially resolve the blackbody temperature of a radiatively cooling warm dense plasma. The pyrometer is composed of a lens-coupled fiber array, a Czerny-Turner visible spectrometer, and an intensified gated CCD for the detector. The radiatively cooling warm dense plasma is generated by a ~100-ns-long intense relativistic electron bunch with an energy of 19.1 MeV and a current of 0.2 kA interacting with 100-μm-thick low-Z foils. The continuum spectrum is measured over 250 nm with a low groove density grating. These plasmas emit visible light or blackbody radiation on relatively long time scales (~0.1 to 100 μs). The diagnostic layout, calibration, and a proof-of-principle measurement of a radiatively cooling aluminum plasma are presented, including a spatially resolved temperature gradient and the ability to resolve it temporally as well. 12. Performance of the Time Resolved Spectrometer for the 5 MeV Photo-Injector PHIN CERN Document Server Olvegaard, M; Mete, O; Csatari, M; Dabrowski, A; Dobert, S; Lefevre, T; Petrarca, M 2011-01-01 The PHIN photo-injector test facility is being commissioned at CERN to demonstrate the capability to produce the required beam for the 3rd CLIC Test Facility (CTF3), which includes the production of a stable 3.5 A beam, bunched at 1.5 GHz with a relative energy spread of less than 1%. A 90° spectrometer is instrumented with an OTR screen coupled to a gated intensified camera, followed by a segmented beam dump for time resolved energy measurements. The following paper describes the transverse and temporal resolution of the instrumentation with an outlook towards single-bunch energy measurements. 13. Characterization of a quadrant diamond transmission X-ray detector including a precise determination of the mean electron-hole pair creation energy.
Science.gov (United States) Keister, Jeffrey W; Cibik, Levent; Schreiber, Swenja; Krumrey, Michael 2018-03-01 Precise monitoring of the incoming photon flux is crucial for many experiments using synchrotron radiation. For photon energies above a few keV, thin semiconductor photodiodes can be operated in transmission for this purpose. Diamond is a particularly attractive material as a result of its low absorption. The responsivity of a state-of-the-art diamond quadrant transmission detector has been determined with relative uncertainties below 1% by direct calibration against an electrical substitution radiometer. From these data and the measured transmittance, the thickness of the involved layers as well as the mean electron-hole pair creation energy were determined, the latter with an unprecedented relative uncertainty of 1%. The linearity and X-ray scattering properties of the device are also described. 14. Renner-Teller effect in linear tetra-atomic molecules. I. Variational method including couplings between all degrees of freedom on six-dimensional potential energy surfaces Science.gov (United States) Jutier, L.; Léonard, C.; Gatti, F. 2009-04-01 For electronically degenerate states of linear tetra-atomic molecules, a new method is developed for the variational treatment of the Renner-Teller and spin-orbit couplings. The approach takes into account all rotational and vibrational degrees of freedom, the dominant couplings between the corresponding angular momenta as well as the couplings with the electronic and electron spin angular momenta. The complete rovibrational kinetic energy operator is expressed in Jacobi coordinates, where the rovibrational angular momenta Ĵ_N have been replaced by Ĵ - L̂_ez - Ŝ and the spin-orbit coupling has been described by the perturbative term A_SO × L̂_ez·Ŝ_z. Attention has been paid to the electronic wave functions, which require an additional phase for linear tetra-atomic molecules.
Our implemented rovibrational basis functions and the integration of the different parts of the total Hamiltonian operator are described. This new variational approach is tested on the electronic ground state X ²Πu of HCCH+, for which new six-dimensional potential energy surfaces have been computed using the internally contracted multireference configuration interaction method and the cc-pV5Z basis set. The calculated rovibronic energies and their comparisons with previous theoretical and experimental works are presented in the next paper. 15. Enrico Fermi Awards Ceremony for Dr. Mildred S. Dresselhaus and Dr. Burton Richter, May 2012 (Presentations, including remarks by Energy Secretary, Dr. Steven Chu) Energy Technology Data Exchange (ETDEWEB) Chu, Steven (U.S. Energy Secretary) 2012-05-07 The Fermi Award is a Presidential award and is one of the oldest and most prestigious science and technology honors bestowed by the U.S. Government. On May 7, 2012 it was conferred upon two exceptional scientists: Dr. Mildred Dresselhaus, 'for her scientific leadership, her major contributions to science and energy policy, her selfless work in science education and the advancement of diversity in the scientific workplace, and her highly original and impactful research,' and Dr. Burton Richter, 'for the breadth of his influence in the multiple disciplines of accelerator physics and particle physics, his profound scientific discoveries, his visionary leadership as SLAC Director, his leadership of science, and his notable contributions in energy and public policy.' Dr. John Holdren, Director of the White House Office of Science and Technology Policy, opened the ceremony, and Dr. Bill Brinkman, Director of DOE's Office of Science, introduced the main speaker, Dr. Steven Chu, U.S. Energy Secretary. 16. Development of a Cloud Resolving Model for Heterogeneous Supercomputers Science.gov (United States) Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.
2017-12-01 A cloud resolving climate model is needed to reduce major systematic errors in climate simulations due to structural uncertainty in numerical treatments of convection - such as convective storm systems. This research describes the porting effort to enable the SAM (System for Atmosphere Modeling) cloud resolving model to run on heterogeneous supercomputers using GPUs (Graphics Processing Units). We have isolated a standalone configuration of SAM that is targeted to be integrated into the DOE ACME (Accelerated Climate Modeling for Energy) Earth System model. We have identified key computational kernels from the model and offloaded them to a GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization, including loop fusion/fission, coalesced data access, and loop refactoring to a higher abstraction level. We will present early performance results, lessons learned, and optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed that is one generation removed from Summit, the next leadership class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes, wherein each node has 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of a larger project, the ACME-MMF component of the U.S. Department of Energy (DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with cloud resolving "superparameterization" within each grid cell of the global climate model. Superparameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy to achieve good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME, and explore its full potential to scientifically and computationally advance climate simulation and prediction.
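The loop fusion strategy named in the abstract above can be illustrated with a minimal sketch. The kernel, array names, and constants below are hypothetical (not from the SAM/ACME code base): two separate elementwise passes over the same arrays are merged into one pass, so each element is visited once, which raises arithmetic intensity per memory access; that is the form that maps well to a single GPU kernel.

```python
def adjust_unfused(temp, qv, n):
    """Unfused form: two separate passes over the data."""
    out_t = [0.0] * n
    out_q = [0.0] * n
    for i in range(n):                      # pass 1: scale temperature
        out_t[i] = temp[i] * 1.01
    for i in range(n):                      # pass 2: clamp vapor mixing ratio
        out_q[i] = max(qv[i] - 0.001, 0.0)
    return out_t, out_q

def adjust_fused(temp, qv, n):
    """Fused form: both updates in one pass, halving loop traffic.
    Same results, better locality and arithmetic intensity."""
    out_t = [0.0] * n
    out_q = [0.0] * n
    for i in range(n):
        out_t[i] = temp[i] * 1.01
        out_q[i] = max(qv[i] - 0.001, 0.0)
    return out_t, out_q
```

In an OpenACC port, the fused loop body becomes a single accelerated region instead of two kernel launches with an intervening round trip through memory.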
17. Spatially resolved and time-resolved imaging of transport of indirect excitons in high magnetic fields Science.gov (United States) Dorow, C. J.; Hasling, M. W.; Calman, E. V.; Butov, L. V.; Wilkes, J.; Campman, K. L.; Gossard, A. C. 2017-06-01 We present direct measurements of magnetoexciton transport. Excitons give the opportunity to realize the high magnetic-field regime for composite bosons with magnetic fields of a few tesla. Long lifetimes of indirect excitons allow the study of the kinetics of magnetoexciton transport with time-resolved optical imaging of exciton photoluminescence. We performed spatially, spectrally, and time-resolved optical imaging of transport of indirect excitons in high magnetic fields. We observed that an increasing magnetic field slows down magnetoexciton transport. The time-resolved measurements of the magnetoexciton transport distance allowed for an experimental estimation of the magnetoexciton diffusion coefficient. An enhancement of the exciton photoluminescence energy at the laser excitation spot was found to anticorrelate with the exciton transport distance. A theoretical model of indirect magnetoexciton transport is presented and is in agreement with the experimental data. 18. Photon number projection using non-number-resolving detectors International Nuclear Information System (INIS) Rohde, Peter P; Webb, James G; Huntington, Elanor H; Ralph, Timothy C 2007-01-01 Number-resolving photo-detection is necessary for many quantum optics experiments, especially in the preparation of entangled states. Several schemes have been proposed for approximating number-resolving photo-detection using non-number-resolving detectors. Such techniques include multi-port detection and time-division multiplexing. We provide a detailed analysis and comparison of different number-resolving detection schemes, with a view to creating a useful reference for experimentalists.
We show that the ideal architecture for projective measurements is a function of the detector's dark count and efficiency parameters. We also describe a process for selecting an appropriate topology given actual experimental component parameters. Energy Technology Data Exchange (ETDEWEB) Walker, Matthew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kruizenga, Alan Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Withey, Elizabeth Ann [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States) 2017-08-01 The supercritical carbon dioxide (S-CO2) Brayton Cycle has gained significant attention in the last decade as an advanced power cycle capable of achieving high efficiency power conversion. Sandia National Laboratories, with support from the U.S. Department of Energy Office of Nuclear Energy (US DOE-NE), has been conducting research and development in order to deliver a technology that is ready for commercialization. Root cause analysis has been performed on the Recompression Loop at Sandia National Laboratories. It was found that particles throughout the loop are stainless steel, likely alloy 316 based upon the elemental composition. Deployment of a filter scheme is underway both to protect the turbomachinery and to determine the specific cause of the particulate. Shakedown tests of electric resistance (ER) as a potential in-situ monitoring scheme show promise in high-temperature systems. A modified instrument was purchased and held at 650°C for more than 1.5 months to date without issue. Quantitative measurements of this instrument will be benchmarked against witness samples in the future, but all qualitative trends to date are as expected. ER is a robust method for corrosion monitoring, but it responds very slowly and can take several weeks under test conditions to show obvious changes in behavior.
Electrochemical noise was identified as an advanced technique that should be pursued for its ability to identify transients that would lead to poor material performance. 20. Study Modules for Calculus-Based General Physics. [Includes Modules 8-10: Conservation of Energy; Impulse and Momentum; and Rotational Motion]. Science.gov (United States) Fuller, Robert G., Ed.; And Others This is part of a series of 42 Calculus Based Physics (CBP) modules totaling about 1,000 pages. The modules include study guides, practice tests, and mastery tests for a full-year individualized course in calculus-based physics based on the Personalized System of Instruction (PSI). The units are not intended to be used without outside materials;… 1. Energy International Nuclear Information System (INIS) Meister, F.; Ott, F. 2002-01-01 This chapter gives an overview of the current energy economy in Austria. The Austrian political aims of sustainable development and climate protection imply a reorientation of the Austrian energy policy as a whole. Energy consumption trends (1993-1998), final energy consumption by energy carrier (indexed data 1993-1999), comparative analysis of useful energy demand (1993 and 1999) and final energy consumption of renewable energy sources by sector (1996-1999) in Austria are given. The necessary measures to be taken in order to reduce the energy demand and increase the use of renewable energy are briefly mentioned. Figs. 5. (nevyjel) 2. Resolving Lifshitz Horizons Energy Technology Data Exchange (ETDEWEB) Harrison, Sarah; Kachru, Shamit; Wang, Huajia; /Stanford U., ITP /Stanford U., Phys. Dept. /SLAC 2012-04-24 Via the AdS/CFT correspondence, ground states of field theories at finite charge density are mapped to extremal black brane solutions. Studies of simple gravity + matter systems in this context have uncovered wide new classes of extremal geometries.
The Lifshitz metrics characterizing field theories with non-trivial dynamical critical exponent z ≠ 1 emerge as one common endpoint in doped holographic toy models. However, the Lifshitz horizon exhibits mildly singular behaviour - while curvature invariants are finite, there are diverging tidal forces. Here we show that in some of the simplest contexts where Lifshitz metrics emerge, Einstein-Maxwell-dilaton theories, generic corrections lead to a replacement of the Lifshitz metric, in the deep infrared, by a re-emergent AdS₂ × R² geometry. Thus, at least in these cases, the Lifshitz scaling characterizes the physics over a wide range of energy scales, but the mild singularity is cured by quantum or stringy effects. 3. Energy International Nuclear Information System (INIS) Meister, F. 2001-01-01 This chapter of the environmental control report deals with the environmental impact of energy production, energy conversion, atomic energy and renewable energy. The development of the energy consumption in Austria for the years 1993 to 1999 is given for the different energy types. The development of the use of renewable energy sources in Austria is given, different domestic heat-systems are compared, life cycles and environmental balance are outlined. (a.n.) 4. From individuals to populations to communities: a dynamic energy budget model of marine ecosystem size-spectrum including life history diversity. Science.gov (United States) Maury, Olivier; Poggiale, Jean-Christophe 2013-05-07 Individual metabolism, predator-prey relationships, and the role of biodiversity are major factors underlying the dynamics of food webs and their response to environmental variability. Despite their crucial, complementary and interacting influences, they are usually not considered simultaneously in current marine ecosystem models.
In an attempt to fill this gap and determine if these factors and their interaction are sufficient to allow realistic community structure and dynamics to emerge, we formulate a mathematical model of the size-structured dynamics of marine communities which mechanistically integrates the individual, population and community levels. The model represents the transfer of energy, in both time and size, generated by an infinite number of interacting fish species spanning from very small to very large species. It is based on standard individual level assumptions of the Dynamic Energy Budget theory (DEB) as well as important ecological processes such as opportunistic size-based predation and competition for food. Resting on the inter-specific body-size scaling relationships of the DEB theory, the diversity of life-history traits (i.e. biodiversity) is explicitly integrated. The stationary solutions of the model as well as the transient solutions arising when environmental signals (e.g. variability of primary production and temperature) propagate through the ecosystem are studied using numerical simulations. It is shown that in the absence of density-dependent feedback processes, the model exhibits unstable oscillations. Density-dependent schooling probability and schooling-dependent predatory and disease mortalities are proposed to be important stabilizing factors allowing stationary solutions to be reached. At the community level, the shape and slope of the obtained quasi-linear stationary spectrum match well with empirical studies. When oscillations of primary production are simulated, the model predicts that the variability propagates along the 5. On the resolvents methods in quantum perturbation calculations International Nuclear Information System (INIS) Burzynski, A. 1979-01-01 This paper gives a systematic review of resolvent methods in quantum perturbation calculations.
The case of a discrete Hamiltonian spectrum is considered in particular (in the literature this is the least frequently considered case). Calculations of quantum transitions using the resolvent formalism, quantum transitions between states from particular subspaces, and shifts of energy levels are presented. The main ideas of stationary perturbation theory developed by Lippmann and Schwinger are considered too. (author) 6. Time-resolved terahertz spectroscopy of semiconductor nanostructures DEFF Research Database (Denmark) Porte, Henrik This thesis describes time-resolved terahertz spectroscopy measurements on various semiconductor nanostructures. The aim is to study the carrier dynamics in these nanostructures on a picosecond timescale. In a typical experiment carriers are excited with a visible or near-infrared pulse...... be significantly reduced. Besides time-resolved terahertz spectroscopy measurements, optical transmission, Raman spectroscopy, scanning electron microscopy, energy dispersive X-ray, and X-ray diffraction spectroscopy experiments on black silicon are presented.... 7. Evidence for the blue 10 pi S62+ dication in solutions of S8(AsF6)2: a computational study including solvation energies. Science.gov (United States) Krossing, Ingo; Passmore, Jack 2004-02-09 The energetics of dissociation reactions of S(8)(2+) into stoichiometric mixtures of S(n)(+), n = 2-7, and S(m)(2+), m = 3, 4, 6, 10, were investigated by the B3PW91 method [6-311+G(3df)//6-311+G] in the gas phase and in solution, with solvation energies calculated using the SCIPCM model and in some cases also the COSMO model [B3PW91/6-311+G*, dielectric constants 2-30, 83, 110]. UV-vis spectra of all species were calculated at the CIS/6-311G(2df) level and for S(4)(2+) and S(6)(2+) also at the TD-DFT level (BP86/SV(P)). Standard enthalpies of formation at 298 K were derived for S(3)(2+) (2538 kJ/mol), S(6)(2+) (2238 kJ/mol), and S(10)(2+) (2146 kJ/mol).
A comparison of the observed and calculated UV-vis spectra based on our calculated thermochemical data in solution suggests that, in the absence of traces of facilitating agent (such as dibromine Br(2)), S(8)(2+) dissociates in dilute SO(2) solution giving an equilibrium mixture of ca. 0.5S(6)(2+) and S(5)(+) (K approximately 8.0) while in the more polar HSO(3)F some S(8)(2+) remains (K approximately 0.4). According to our calculations, the blue color of this solution is likely due to the pi-pi transition of the previously unknown 10 pi S(6)(2+) dication, and the previously assigned S(5)(+) is a less important contributor. Although not strictly planar, S(6)(2+) may be viewed as a 10 pi electron Hückel-aromatic ring containing a thermodynamically stable 3p(pi)-3p(pi) bond [d(S-S) = 2.028 Å; tau(S-S-S-S) = 47.6 degrees]. The computations imply that the new radical cation S(4)(+) may be present in sulfur dioxide solutions obtained on oxidation of sulfur by AsF(5) in the presence of a facilitating agent. The standard enthalpy of formation of S(6)(AsF(6))(2)(s) was estimated as -3103 kJ/mol, and the disproportionation enthalpy of 2S(6)(AsF(6))(2)(s) to S(8)(AsF(6))(2)(s) and S(4)(AsF(6))(2)(s) as exothermic by 6-17 kJ/mol. The final preference of the observed disproportionation products is due to the inclusion of 8. 1998 Annual Study Report. Surveys on seeds for global environmental technologies, including those for energy saving; 1998 nendo chosa hokokusho. Sho energy nado chikyu kankyo taisaku gijutsu no seeds ni kansuru chosa Energy Technology Data Exchange (ETDEWEB) NONE 1999-03-01 The energy-saving and other global environmental technologies are surveyed by collecting relevant information from various institutes, both abroad and domestic, to contribute to the development of ceramic gas turbines. The USA has announced a climate change plan, based on five principles, to promote utilization of high-efficiency technologies and development of new clean technologies.
The UK is promoting improved energy efficiency, along with liberalization of its energy markets. Germany concentrates its efforts in the 'Program for Energy Research and Energy Technologies.' France places emphasis on prevention of air pollution and rational use of energy. The R and D trends at public institutes, e.g., universities, for global environmental technologies are surveyed, from which a total of 14 themes are extracted as the seed technologies. At the same time, a total of 9 techniques potentially applicable to the seeds are extracted, mainly by reviewing JICST and patent information, and assessed. The R&D trends of the IPCC-related researchers are also surveyed, but provide no themes directly applicable to the seeds. Most of the related themes at the private and public institutes surveyed, both domestic and abroad, are concentrated on carbon dioxide. (NEDO) 9. Including solar energy in the local heat supply of the Goettingen city works; Einbindung von Sonnenenergie in die Nahwaermeversorgung der Stadtwerke Goettingen AG Energy Technology Data Exchange (ETDEWEB) Tepe, R. [ISFH - Institut fuer Solarenergieforschung Hameln-Emmerthal GmbH, Emmerthal (Germany); Schreitmueller, K.R. [ISFH - Institut fuer Solarenergieforschung Hameln-Emmerthal GmbH, Emmerthal (Germany); Vanoli, K. [ISFH - Institut fuer Solarenergieforschung Hameln-Emmerthal GmbH, Emmerthal (Germany)] 1996-11-01 The research project Solar local heat Goettingen was started in 1992, in which, by including a 785 m² flat collector plant in the return of the local heat supply of the Goettingen City Works, the potential of the combined system of solar plant and conventional heat supply system was to be demonstrated.
The size of the collector plant and inclusion in an existing local heat network promised an advantageous combination due to appreciably lower investment costs (lower collector installation costs) and savings in system technology, reduced operating costs, and higher yields due to favourable operating conditions with consistently low collector operating temperatures and reduced piping losses. In parallel with this system, the Goettingen City Works installed an air collector plant which is used to preheat the combustion air taken to the conventional burners. (orig./HW) [Deutsch] Es entstand im Jahr 1992 das Forschungsvorhaben Solare Nahwaerme Goettingen, in dem durch die Einbindung einer 785 m² grossen Flachkollektoranlage in den Ruecklauf der Nahwaermeversorgung der Stadtwerke Goettingen AG das Potential der Systemkombination Solaranlage und konventionelle Waermeversorgungssystem nachgewiesen werden sollte. Die Groesse der Kollektoranlage sowie die Einbindung in ein bestehendes Nahwaermenetz versprachen eine vorteilhafte Kombination aufgrund - deutlich geringerer Investitionskosten (geringe Kollektorinstallationskosten sowie Einsparungen bei der Systemtechnik); - reduzierter Betriebskosten; - hoher Ertraege durch guenstige Betriebsbedingungen wie gleichbleibend niedriger Kollektorbetriebstemperaturen und reduzierter Leitungsverluste. Parallel zu diesem System installierten die Stadtwerke Goettingen AG eine Luftkollektoranlage, die der Vorwaermung der den konventionellen Brennern zugefuehrten Verbrennungsluft dient. (orig./HW) 10. Angle-resolved photoemission extended fine structure International Nuclear Information System (INIS) Barton, J.J. 1985-03-01 Measurements of the Angle-Resolved Photoemission Extended Fine Structure (ARPEFS) from the S(1s) core level of a c(2 x 2)S/Ni(001) surface are analyzed to determine the spacing between the S overlayer and the first and second Ni layers.
ARPEFS is a type of photoelectron diffraction measurement in which the photoelectron kinetic energy is swept, typically from 100 to 600 eV. By using this wide range of intermediate energies we add high precision and theoretical simplification to the advantages of the photoelectron diffraction technique for determining surface structures. We report developments in the theory of photoelectron scattering in the intermediate energy range, measurement of the experimental photoemission spectra, their reduction to ARPEFS, and the surface structure determination from the ARPEFS by combined Fourier and multiple-scattering analyses. 202 refs., 67 figs., 2 tabs 11. Time-resolved studies. Ch. 9 International Nuclear Information System (INIS) Mills, Dennis M.; Argonne National Lab., IL 1991-01-01 Synchrotron radiation, with its unique properties, offers a tool to extend X-ray measurements from the static to the time-resolved regime. The most straightforward application of synchrotron radiation to the study of transient phenomena is directly through the possibility of decreased data-collection times via the enormous increase in flux over that of a laboratory X-ray system. Even further increases in intensity can be obtained through the use of novel X-ray optical devices. Wide-bandpass monochromators, for example, which utilize the continuous spectral distribution of synchrotron radiation, can increase flux on the sample by several orders of magnitude over conventional X-ray optical systems, thereby allowing a further shortening of the data-collection time. Another approach that uses the continuous spectral nature of synchrotron radiation to decrease data-collection times is the 'parallel data collection' method. Using this technique, intensities as a function of X-ray energy are recorded simultaneously for all energies rather than sequentially at each energy, allowing for a dramatic decrease in data-collection time.
Perhaps the most exciting advances in time-resolved X-ray studies will be made by those methods that exploit the pulsed nature of the radiation emitted from storage rings. Pulsed techniques have had an enormous impact in the study of the temporal evolution of transient phenomena. The extension from continuous to modulated sources for use in time-resolved work has been carried over in a host of fields that use both pulsed particle and pulsed electromagnetic beams. In this chapter the new experimental techniques are reviewed and illustrated with some experiments. (author). 98 refs.; 20 figs.; 5 tabs 12. High resolving power spectrometer for beam analysis Science.gov (United States) Moshammer, H. W.; Spencer, J. E. 1992-03-01 We describe a system designed to analyze the high energy, closely spaced bunches from individual RF pulses. Neither a large solid angle nor a large momentum range is required, which allows characteristics that appear useful for other applications such as ion beam lithography. The spectrometer is a compact, double-focusing QBQ design whose symmetry allows the Quads to range between F and D with a correspondingly large range of magnifications, dispersion, and resolving power. This flexibility ensures the possibility of spatially separating all of the bunches along the focal plane with minimal transverse kicks and bending angle for differing input conditions. The symmetry of the system allows a simple geometric interpretation of the resolving power in terms of thin lenses and ray optics. We discuss the optics and the hardware that is proposed to measure emittance, energy, energy spread, and bunch length for each bunch in an RF pulse train for small bunch separations. We also discuss how to use such measurements for feedback and feedforward control of these bunch characteristics as well as to maintain their stability. 13.
WFIRST: Resolving the Milky Way Galaxy Science.gov (United States) Kalirai, Jason; Conroy, Charlie; Dressler, Alan; Geha, Marla; Levesque, Emily; Lu, Jessica; Tumlinson, Jason 2018-01-01 WFIRST will yield a transformative impact in measuring and characterizing resolved stellar populations in the Milky Way. The proximity of these populations and the level of detail at which they must be studied map directly to all three pillars of WFIRST capabilities - sensitivity from a 2.4-meter space-based telescope, resolution from 0.1" pixels, and a large 0.3 degree field of view from multiple detectors. In this poster, we describe the activities of the WFIRST Science Investigation Team (SIT), "Resolving the Milky Way with WFIRST". Notional programs guiding our analysis include targeting sightlines to establish the first well-resolved large scale maps of the Galactic bulge and central region, pockets of star formation in the disk, benchmark star clusters, and halo substructure and ultra faint dwarf satellites. As an output of this study, our team is building optimized strategies and tools to maximize stellar population science with WFIRST. This will include: new grids of IR-optimized stellar evolution and synthetic spectroscopic models; pipelines and algorithms for optimal data reduction at the WFIRST sensitivity and pixel scale; wide field simulations of Milky Way environments including new astrometric studies; and strategies and automated algorithms to find substructure and dwarf galaxies in the Milky Way through the WFIRST High Latitude Survey. 14. Energy International Nuclear Information System (INIS) Bobin, J.L. 1996-01-01 An object of science and technology, energy plays a major part in economics and in relations between nations. Jean-Louis Bobin, physicist, analyses the relations between man and energy and examines the fears raised nowadays by technologies linked to nuclear energy, as well as the fear of a possible shortage of energy resources. (N.C.). 17 refs., 14 figs., 2 tabs 15.
Development of a Spatially-Resolved Microwave Interferometer Science.gov (United States) Specht, Paul; Cooper, Marcia 2015-06-01 The development of a spatially-resolved microwave interferometer (SRMI) for non-invasively measuring the internal transit of a shock, detonation, or reaction front in energetic media is presented. Utilizing the transparency of many energetic materials in the RF regime, current microwave interferometers provide continuum-level tracking of the dielectric discontinuity that occurs across a shock or reaction front. While this continuum-level response can provide bulk shock and detonation velocities, it is insufficient to understand the complex wave and material interactions present in heterogeneous energetic materials. Leveraging interferometry and terahertz spectroscopy techniques, a heterodyne, spatially-resolved microwave interferometer was designed. A theoretical description of its operation and its potential impact on current energetic materials research are discussed. Preliminary experimental results, including electro-optic sensing of a Doppler-shifted microwave beam, are presented. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND2015-0308A. 16. Energy CERN Document Server Foland, Andrew Dean 2007-01-01 Energy is the central concept of physics. Unable to be created or destroyed but transformable from one form to another, energy ultimately determines what is and isn't possible in our universe. This book gives readers an appreciation for the limits of energy and the quantities of energy in the world around them. This fascinating book explores the major forms of energy: kinetic, potential, electrical, chemical, thermal, and nuclear. 17.
Ca(AlH4)2, CaAlH5, and CaH2+6LiBH4: Calculated dehydrogenation enthalpy, including zero point energy, and the structure of the phonon spectra NARCIS (Netherlands) Marashdeh, A.; Frankcombe, T.J. 2008-01-01 The dehydrogenation enthalpies of Ca(AlH4)2, CaAlH5, and CaH2+6LiBH4 have been calculated using density functional theory calculations at the generalized gradient approximation level. Harmonic phonon zero point energy (ZPE) corrections have been included using Parlinski’s direct method. The 18. Resolving Ethical Issues at School Science.gov (United States) Benninga, Jacques S. 2013-01-01 Although ethical dilemmas are a constant in teachers' lives, the profession has offered little in the way of training to help teachers address such issues. This paper presents a framework, based on developmental theory, for resolving professional ethical dilemmas. The Four-Component Model of Moral Maturity, when used in conjunction with a… 19. Time-resolved quantitative phosphoproteomics DEFF Research Database (Denmark) Verano-Braga, Thiago; Schwämmle, Veit; Sylvester, Marc 2012-01-01 proteins involved in the Ang-(1-7) signaling, we performed a mass spectrometry-based time-resolved quantitative phosphoproteome study of human aortic endothelial cells (HAEC) treated with Ang-(1-7). We identified 1288 unique phosphosites on 699 different proteins with 99% certainty of correct peptide... 20. Energy CERN Document Server Robertson, William C 2002-01-01 Confounded by kinetic energy? Suspect that teaching about simple machines isn't really so simple? Exasperated by electricity? If you fear the study of energy is beyond you, this entertaining book will do more than introduce you to the topic. It will help you actually understand it. At the book's heart are easy-to-grasp explanations of energy basics (work, kinetic energy, potential energy, and the transformation of energy) and of energy as it relates to simple machines, heat energy, temperature, and heat transfer.
Irreverent author Bill Robertson suggests activities that bring the basic concepts of energy to life with common household objects. Each chapter ends with a summary and an applications section that uses practical examples such as roller coasters and home heating systems to explain energy transformations and convection cells. The final chapter brings together key concepts in an easy-to-grasp explanation of how electricity is generated. Energy is the second book in the Stop Faking It! series published by NS... 1. Energies International Nuclear Information System (INIS) 2003-01-01 In the framework of the National Debate on energies in the context of sustainable development, some environmental associations organized a debate on the merits of nuclear power versus renewable energies. The first part presents nuclear energy as a possible solution to fight the greenhouse effect, along with the associated problem of waste management. The second part gives information on solar energy and the possibilities for heat and electric power production. A presentation by the FEE (French wind power association) on the situation and development of wind power in France is also provided. (A.L.B.) 2. Minimum resolvable power contrast model Science.gov (United States) Qian, Shuai; Wang, Xia; Zhou, Jingjing 2018-01-01 Signal-to-noise ratio and MTF are important indices for evaluating the performance of optical systems. However, neither used alone nor assessed jointly can they intuitively describe the overall performance of the system. Therefore, an index is proposed to reflect comprehensive system performance: the Minimum Resolvable Radiation Performance Contrast (MRP) model. MRP is an evaluation model that does not involve the human eye. It starts from the radiance of the target and the background, transforms the target and background into equivalent strips, and considers attenuation by the atmosphere, the optical imaging system, and the detector.
Combining these with the signal-to-noise ratio and the MTF yields the Minimum Resolvable Radiation Performance Contrast. Finally, the detection probability model of MRP is given. 3. Cross section parameterization in the resolved resonance region International Nuclear Information System (INIS) Larson, N.M. 1992-01-01 Experimental techniques, methods, and equipment have evolved to provide more accurate neutron cross section data with better energy resolution. Keeping pace with those developments has been a challenge for data analysts; commensurate improvements in analysis tools are required. In this paper, analysis techniques for neutron time-of-flight data in the resolved resonance region are discussed, with emphasis on contemporary needs 4. Time-resolved x-ray diagnostics International Nuclear Information System (INIS) Lyons, P.B. 1981-01-01 Techniques for time-resolved x-ray diagnostics will be reviewed with emphasis on systems utilizing x-ray diodes or scintillators. System design concerns for high-bandwidth (> 1 GHz) diagnostics will be emphasized. The limitations of a coaxial cable system and a technique for equalization to improve the bandwidth of such a system will be reviewed. Characteristics of new multi-GHz amplifiers will be presented. An example of a complete operational system on the Los Alamos Helios laser will be presented, which has a bandwidth near 3 GHz over 38 m of coax. The system includes the cable, an amplifier, an oscilloscope, and a digital camera readout 5. Energy Recovery Linacs Energy Technology Data Exchange (ETDEWEB) Nikolitsa Merminga 2007-06-01 The success and continuing progress of the three operating FELs based on Energy Recovery Linacs (ERLs), the Jefferson Lab IR FEL Upgrade, the Japan Atomic Energy Agency (JAEA) FEL, and the Novosibirsk High Power THz FEL, have inspired multiple future applications of ERLs, which include higher power FELs, synchrotron radiation sources, electron cooling devices, and high luminosity electron-ion colliders.
The benefits of using ERLs for these applications are presented. The key accelerator physics and technology challenges of realizing future ERL designs, and recent developments towards resolving these challenges, are reviewed. 6. Spatially resolved spectroscopy on semiconductor nanostructures Energy Technology Data Exchange (ETDEWEB) Roessler, Johanna 2009-02-20 Cleaved edge overgrowth (CEO) nanostructures are identified and studied by 1D and 2D μPL mapping scans and by time-resolved and power-dependent measurements. Distinct excitonic ground states of 2-fold CEO QDs with large localization energies are achieved. The deeper localization reached, as compared to the only other report on 2-fold CEO QDs in the literature, is attributed to a new strain-free fabrication process and changed QW thickness in [001] growth. In order to achieve controlled manipulation of 2-fold CEO QDs, the concept of a CEO structure with three top gates and one back gate is presented. Due to the complexity of this device, a simpler test structure is realized. Measurements on this test structure confirm the necessity to either grow significantly thicker overgrowth layers or to provide separate top gates in all three spatial directions to controllably manipulate 2-fold CEO QDs with an external electric field. (orig.) 7. Resonance fluorescence in the resolvent-operator formalism Science.gov (United States) Debierre, V.; Harman, Z. 2017-10-01 The Mollow spectrum for the light scattered by a driven two-level atom is derived in the resolvent operator formalism. The derivation is based on the construction of a master equation from the resolvent operator of the atom-field system. We show that the natural linewidth of the excited atomic level remains essentially unmodified, to a very good level of approximation, even in the strong-field regime, where Rabi flopping becomes relevant inside the self-energy loop that yields the linewidth.
This ensures that the obtained master equation and the derived spectrum match those of Mollow. 8. Constant Matrix Element Approximation to Time-Resolved Angle-Resolved Photoemission Spectroscopy Directory of Open Access Journals (Sweden) James K. Freericks 2016-11-01 Full Text Available We discuss several issues associated with employing a constant matrix element approximation for the coupling of light to multiband electrons in the context of time-resolved angle-resolved photoemission spectroscopy (TR-ARPES). In particular, we demonstrate that the “constant matrix element approximation”—even when reasonable—only holds for specific choices of the one-electron basis, and changing to other bases requires including nonconstant corrections to the matrix element. We also discuss some simplifying approximations, where a constant matrix element is employed in multiple bases, and the consequences of this further approximation (especially with respect to the calculated TR-ARPES signal becoming negative). We also discuss issues related to gauge invariance of the final spectra. 9. Electronic structures of 1-adamantanol, cyclohexanol and cyclohexanone and anisotropic interactions with He*(2 3S) atoms: collision-energy-resolved Penning ionization electron spectroscopy combined with quantum chemistry calculations International Nuclear Information System (INIS) Tian Shanxi; Kishimoto, Naoki; Ohno, Koichi 2002-01-01 He I ultraviolet photoelectron spectra and He*(2 3S) Penning ionization electron spectra have been measured for 1-adamantanol, cyclohexanol and cyclohexanone. Four stable isomeric conformers of cyclohexanol were predicted by Becke's three-parameter hybrid density functional B3LYP/6-31+G(d,p) calculations.
Since the orbital reactivity in Penning ionization is simply related to the electron density extending outside the molecular surface, the theoretical Penning ionization electron spectra were synthesized using the calculated molecular orbital wave functions and ionization potentials. They were in good agreement with the experimental spectra except for the low-electron-energy bands. The collision energy dependence of the partial ionization cross sections for the oxygen lone-pair orbitals showed that there are strong steric hindrances by the neighboring hydrogen atoms in 1-adamantanol and cyclohexanol 10. Time-resolved fluorescence spectroscopy International Nuclear Information System (INIS) Gustavsson, Thomas; Mialocq, Jean-Claude 2007-01-01 This article addresses the evolution in time of light emitted by a molecular system after a brief photo-excitation. The authors first describe fluorescence from a photo-physical point of view and discuss the characterization of the excited state. Then, they explain some basic notions related to fluorescence characterization (lifetimes and decays, quantum efficiency, and so on). They present the different experimental methods and techniques currently used to study time-resolved fluorescence. They discuss basic notions of time resolution and spectral reconstruction. They briefly present some conventional methods: intensified CCD cameras, photomultipliers and photodiodes associated with a fast oscilloscope, and phase modulation. Other methods and techniques are presented in more detail: time-correlated single photon counting (principle, examples, and fluorescence lifetime imaging), streak cameras (principle, examples), and optical methods such as the optical Kerr effect (principle and examples) and fluorescence up-conversion (principle and theoretical considerations, examples of application) 11.
Time- and momentum-resolved phonon decay Science.gov (United States) Reis, David 2017-04-01 The high brightness of x-ray free-electron lasers provides us with a unique opportunity to measure lattice dynamics directly in the time domain and out of equilibrium. As a first step in this direction we demonstrate how ultrafast optical excitation creates temporal coherences in the mean-square phonon displacements spanning the Brillouin zone by a second-order squeezing process. This leads to broad-bandwidth high-resolution measurements of the phonon dispersion without the need for high-resolution monochromators or analyzers. We will also show how anharmonic phonon decay can be viewed as a parametric squeezing process, and present the first momentum-resolved measurements of the downconversion of a coherent optical phonon into pairs of high-wavevector acoustic modes, information that cannot be obtained by spectroscopic measurements in the frequency domain. Supported by the Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, under Contract DE-AC02-76SF00515. 12. PROPERTIES AND MICROSTRUCTURE OF CEMENT PASTE INCLUDING RECYCLED CONCRETE POWDER Directory of Open Access Journals (Sweden) Jaroslav Topič 2017-02-01 Full Text Available The disposal and further recycling of concrete is being investigated worldwide, because the issue of complete recycling has not yet been fully resolved. A fundamental difficulty faced by researchers is the reuse of the recycled concrete fines, which are very small (< 1 mm). Currently, full recycling of such waste fine fractions is highly energy intensive and results in the production of CO2. Because of this, the only recycling methods that can be considered sustainable and environmentally friendly are those which involve recycled concrete powder (RCP) in its raw form.
This article investigates the performance of RCP with a grain size < 0.25 mm as a potential binder replacement, and also as a microfiller in cement-based composites. Here, the RCP properties are assessed, including how the mechanical properties and the microstructure are influenced by increasing the amount of RCP in a cement paste (≤ 25 wt%). 13. Angle-resolved photoelectron spectrometry: new electron optics and detection system International Nuclear Information System (INIS) Hoof, H.A. van. 1980-01-01 A new spectrometer system is described, designed to measure angle-resolved energy distributions of photoemitted electrons efficiently. Some results are presented of measurements on a Si(001) surface. (Auth.) 14. Panchromatic SED modelling of spatially resolved galaxies Science.gov (United States) Smith, Daniel J. B.; Hayward, Christopher C. 2018-05-01 We test the efficacy of the energy-balance spectral energy distribution (SED) fitting code MAGPHYS for recovering the spatially resolved properties of a simulated isolated disc galaxy, for which it was not designed. We perform 226 950 MAGPHYS SED fits to regions between 0.2 and 25 kpc in size across the galaxy's disc, viewed from three different sight-lines, to probe how well MAGPHYS can recover key galaxy properties based on 21 bands of UV-far-infrared model photometry. MAGPHYS yields statistically acceptable fits to >99 per cent of the pixels within the r-band effective radius and between 59 and 77 per cent of pixels within 20 kpc of the nucleus. MAGPHYS is able to recover the distribution of stellar mass, star formation rate (SFR), specific SFR, dust luminosity, dust mass, and V-band attenuation reasonably well, especially when the pixel size is ≳ 1 kpc, whereas non-standard outputs (stellar metallicity and mass-weighted age) are recovered less well. Accurate recovery is more challenging in the smallest sub-regions of the disc (pixel scale ≲ 1 kpc), where the energy balance criterion becomes increasingly incorrect.
When integrated galaxy properties are estimated by summing the recovered pixel values, the true integrated values of all parameters considered except metallicity and age are well recovered at all spatial resolutions, from 0.2 kpc up to integrating across the disc, albeit with some evidence for resolution-dependent biases. These results must be considered when attempting to analyse the structure of real galaxies with actual observational data, for which the 'ground truth' is unknown. 15. Highly-resolving Rutherford-scattering spectroscopy with heavy ions International Nuclear Information System (INIS) Klein, C. 2003-10-01 In the present thesis, the Browne-Buechner spectrometer for high-resolution ion-beam analysis at the ion beam center Rossendorf is presented in full for the first time. A main topic of this thesis lay in the construction and commissioning of the spectrometer and the scattering chamber, including the facilities for sample treatment and characterization. In the framework of this thesis, the experimental conditions were elaborated for the chosen measurement arrangement which allow routine application of the spectrometer for analyses of thin-film systems. For C and Li ions as incident particles, the straggling in particular was determined more precisely in a large range of materials. The spectrometer also allowed the interaction of the ions with the solid, and with single atoms on its surface, to be studied. For the first time, the mean charge state after a single collision with a gold atom was determined for ions of different masses over a wide energy range 16. Flavour from partially resolved singularities Energy Technology Data Exchange (ETDEWEB) Bonelli, G. [International School of Advanced Studies (SISSA) and INFN, Sezione di Trieste, via Beirut 2-4, 34014 Trieste (Italy)]. E-mail: bonelli@sissa.it; Bonora, L.
[International School of Advanced Studies (SISSA) and INFN, Sezione di Trieste, via Beirut 2-4, 34014 Trieste (Italy); Ricco, A. [International School of Advanced Studies (SISSA) and INFN, Sezione di Trieste, via Beirut 2-4, 34014 Trieste (Italy) 2006-06-15 In this Letter we study topological open string field theory on D-branes in a IIB background given by the non-compact CY geometries O(n) ⊕ O(-2-n) on P^1 with a singular point at which an extra fiber sits. We wrap N D5-branes on P^1 and M effective D3-branes at singular points, which are actually D5-branes wrapped on a shrinking cycle. We calculate the holomorphic Chern-Simons partition function for the above models in a deformed complex structure and find that it reduces to multi-matrix models with flavour. These are the matrix models whose resolvents have been shown to satisfy the generalized Konishi anomaly equations with flavour. In the n=0 case, corresponding to a partial resolution of the A_2 singularity, the quantum superpotential in the N=1 unitary SYM with one adjoint and M fundamentals is obtained. The n=1 case is also studied and shown to give rise to two-matrix models which for a particular set of couplings can be exactly solved. We explicitly show how to solve such a class of models by a quantum equation of motion technique.
We analyse the behaviour of the dynamic heat current for driving frequencies within the non-adiabatic regime, showing that it does not obey a Joule dissipation law. 18. Direct angle resolved photoemission spectroscopy and ... Keywords. Condensed matter physics; high-Tc superconductivity; electronic properties; photoemission spectroscopy; angle resolved photoemission spectroscopy; cuprates; films; strain; pulsed laser deposition. 19. Pump apparatus including deconsolidator Energy Technology Data Exchange (ETDEWEB) Sonwane, Chandrashekhar; Saunders, Timothy; Fitzsimmons, Mark Andrew 2014-10-07 A pump apparatus includes a particulate pump that defines a passage that extends from an inlet to an outlet. A duct is in flow communication with the outlet. The duct includes a deconsolidator configured to fragment particle agglomerates received from the passage. 20. Resolving colocalization of bacteria and metal(loid)s on plant root surfaces by combining fluorescence in situ hybridization (FISH) with multiple-energy micro-focused X-ray fluorescence (ME μXRF). Science.gov (United States) Honeker, Linnea K; Root, Robert A; Chorover, Jon; Maier, Raina M 2016-12-01 Metal(loid) contamination of the environment due to anthropogenic activities is a global problem. Understanding the fate of contaminants requires elucidation of the biotic and abiotic factors that influence metal(loid) speciation from molecular to field scales. Improved methods are needed to assess micro-scale processes, such as those occurring at biogeochemical interfaces between plant tissues, microbial cells, and metal(loid)s. Here we present an advanced method that combines fluorescence in situ hybridization (FISH) with synchrotron-based multiple-energy micro-focused X-ray fluorescence microprobe imaging (ME μXRF) to examine colocalization of bacteria and metal(loid)s on root surfaces of plants used to phytostabilize metalliferous mine tailings.
Bacteria were visualized on a small root section using the SYTO BC nucleic acid stain and FISH probes targeting the domain Bacteria and a specific group (Alphaproteobacteria, Gammaproteobacteria, or Actinobacteria). The same root region was then analyzed for elemental distribution and metal(loid) speciation of As and Fe using ME μXRF. The FISH and ME μXRF images were aligned using ImageJ software to correlate microbiological and geochemical results. Results from quantitative analysis of colocalization show a significantly higher fraction of As colocalized with Fe-oxide plaques on the root surfaces (fraction of overlap 0.49±0.19) than with bacteria (0.072±0.052) (p…). The combined method spatially relates roots, metal(loid)s and microbes, information that should lead to improved mechanistic models of metal(loid) speciation and fate. Copyright © 2016 Elsevier B.V. All rights reserved. 1. Time resolved two- and three-dimensional plasma diagnostics International Nuclear Information System (INIS) 1991-03-01 This collection of papers on diagnostics in fusion plasmas contains work on the data analysis of inverse problems and on the experimental arrangements presently used to obtain spatially and temporally resolved plasma radial profiles, including electron and ion temperature, plasma density and plasma current profiles. Refs, figs and tabs 2. Magnetic Resonance Microscopy: Spatially Resolved NMR Techniques and Applications CERN Document Server Codd, Sarah 2008-01-01 This handbook and ready reference covers materials science applications as well as microfluidic, biomedical and dental applications and the monitoring of physicochemical processes. It includes the latest in hardware, methodology and applications of spatially resolved magnetic resonance, such as portable imaging and single-sided spectroscopy. For materials scientists, spectroscopists, chemists, physicists, and medicinal chemists. 3.
Optical modulator including graphene Science.gov (United States) Liu, Ming; Yin, Xiaobo; Zhang, Xiang 2016-06-07 The present invention provides for a graphene optical modulator with one or more layers. In a first exemplary embodiment, the optical modulator includes an optical waveguide, a nanoscale oxide spacer adjacent to a working region of the waveguide, and a monolayer graphene sheet adjacent to the spacer. In a second exemplary embodiment, the optical modulator includes at least one pair of active media, where the pair includes an oxide spacer, a first monolayer graphene sheet adjacent to a first side of the spacer, and a second monolayer graphene sheet adjacent to a second side of the spacer, and at least one optical waveguide adjacent to the pair. 4. Enzyme reactions and their time resolved measurements International Nuclear Information System (INIS) Hajdu, Janos 1990-01-01 This paper discusses experimental strategies in data collection with the Laue method and summarises recent results using synchrotron radiation. Then, an assessment is made of the progress towards time resolved studies with protein crystals and the problems that remain. The paper consists of three parts which respectively describe some aspects of Laue diffraction, recent examples of structural results from Laue diffraction, and kinetic Laue crystallography. In the first part, the characteristics of Laue diffraction are discussed, focusing on the harmonics problem, the spatial overlap problem, wavelength normalization, the low-resolution hole, data completeness, and uneven coverage of reciprocal space. Then, capture of the symmetry-unique reflection set is discussed, focusing on the effect of the wavelength range on the number of reciprocal lattice points occupying diffracting positions, the effect of the crystal-to-film distance and the film area and shape on the number of reflections captured, and the effect of crystal symmetry on the number of unique reflections within the number of reflections captured.
The second part addresses the determination of the structure of turkey egg white lysozyme, and calcium binding in tomato bushy stunt virus. The third part describes the initiation of reactions in enzyme crystals, picosecond Laue diffraction at high-energy storage rings, and detectors. (N.K.) 5. RAiSE II: resolved spectral evolution in radio AGN Science.gov (United States) Turner, Ross J.; Rogers, Jonathan G.; Shabala, Stanislav S.; Krause, Martin G. H. 2018-01-01 The active galactic nuclei (AGN) lobe radio luminosities modelled in hydrodynamical simulations and most analytical models do not address the redistribution of the electron energies due to adiabatic expansion, synchrotron radiation and inverse-Compton scattering of cosmic microwave background photons. We present a synchrotron emissivity model for resolved sources that includes a full treatment of the loss mechanisms spatially across the lobe, and apply it to a dynamical radio source model with known pressure and volume expansion rates. The bulk flow and dispersion of discrete electron packets are represented by tracer fields in hydrodynamical simulations; we show that the mixing of different-aged electrons strongly affects the spectrum at each point of the radio map in high-powered Fanaroff & Riley type II (FR-II) sources. The inclusion of this mixing leads to a factor of a few discrepancy between the spectral age measured using impulsive injection models (e.g. the JP model) and the dynamical age. The observable properties of radio sources are predicted to be strongly frequency dependent: FR-II lobes are expected to appear more elongated at higher frequencies, while jetted FR-I sources appear less extended. The emerging FR0 class of radio sources, comprising gigahertz-peaked and compact steep-spectrum sources, can potentially be explained by a population of low-powered FR-Is.
The extended emission from such sources is shown to be undetectable for objects within a few orders of magnitude of the survey detection limit and not to contribute to the curvature of the radio spectral energy distribution. 6. RESOLVE's Field Demonstration on Mauna Kea, Hawaii 2010 Science.gov (United States) Captain, Janine; Quinn, Jacqueline; Moss, Thomas; Weis, Kyle 2010-01-01 In cooperation with the Canadian Space Agency and the Northern Centre for Advanced Technology, Inc., NASA has undertaken the In-Situ Resource Utilization (ISRU) project called RESOLVE (Regolith and Environment Science & Oxygen and Lunar Volatile Extraction). This project is an Earth-based lunar precursor demonstration of a system that could be sent to explore permanently shadowed polar lunar craters, where it would drill into regolith, quantify the volatiles that are present, and extract oxygen by hydrogen reduction of iron oxides. The resulting water could be electrolyzed into oxygen, to support exploration, and hydrogen, which would be recycled through the process. The RESOLVE chemical processing system was mounted on a Canadian Space Agency mobility chassis and successfully demonstrated on Hawaii's Mauna Kea volcano in February 2010. The RESOLVE unit is the initial prototype of a robotic prospecting mission to the Moon. RESOLVE is designed to go to the poles of the Moon to "ground truth" the form and concentration of the hydrogen/water/hydroxyl that has been seen from orbit (M3, Lunar Prospector and LRO) and to test technologies to extract oxygen from the lunar regolith. RESOLVE has the ability to capture a one-meter core sample of lunar regolith and heat it to determine the volatiles that may be released, and then to demonstrate the production of oxygen from minerals found in the regolith.
The RESOLVE project, which is led by KSC, is a multi-center and multi-organizational effort that includes representatives from KSC, JSC, GRC, the Canadian Space Agency, and the Northern Center for Advanced Technology (NORCAT). This paper details the results obtained from four days of lunar analog testing that included gas chromatograph analysis for volatile components, remote control of chemistry and drilling operations via satellite communications, and real-time water quantification using a novel capacitance measurement technique. 7. Batteries not included International Nuclear Information System (INIS) Cooper, M. 2001-01-01 This article traces the development of clockwork wind-up battery chargers that can be used to recharge mobile phones, laptop computers, torches or radio batteries, from the pioneering research of the British inventor Trevor Baylis to the marketing of the wind-up gadgets by Freeplay Energy, who turned the idea into a commercial product. The amount of cranking needed to power wind-up devices is discussed, along with a hand-cranked charger for mobile phones, upgrading of the phone charger's mechanism, and drawbacks of the charger. Details are given of another invention using a hand-cranked generator with a supercapacitor as a storage device, which has a very much higher capacity for storing electrical charge 8. Time resolved spectroscopic studies on some nanophosphors Wintec Abstract. Time resolved spectroscopy is an important tool for studying photophysical processes in phosphors. The present work investigates the steady-state and time-resolved photoluminescence (PL) spectroscopic characteristics of ZnS, ZnO and (Zn, Mg)O nanophosphors both in powder and in thin-film form. 9. Wasted energy? NARCIS (Netherlands) E.M. Steg 1999-01-01 Original title: Verspilde energie? Many environmental problems are increasing primarily due to rising production and consumption, in other words due to the behaviour of consumers.
Accordingly, there is a growing realisation that environmental problems must be partly resolved through a change 10. Resolving the inner disk of UX Orionis Science.gov (United States) Kreplin, A.; Madlener, D.; Chen, L.; Weigelt, G.; Kraus, S.; Grinin, V.; Tambovtseva, L.; Kishimoto, M. 2016-05-01 Aims: The cause of the UX Ori variability in some Herbig Ae/Be stars is still a matter of debate. Detailed studies of the circumstellar environment of UX Ori objects (UXORs) are required to test the hypothesis that the observed drop in photometry might be related to obscuration events. Methods: Using near- and mid-infrared interferometric AMBER and MIDI observations, we resolved the inner circumstellar disk region around UX Ori. Results: We fitted the K-, H-, and N-band visibilities and the spectral energy distribution (SED) of UX Ori with geometric and parametric disk models. The best-fit K-band geometric model consists of an inclined ring and a halo component. We obtained a ring-fit radius of 0.45 ± 0.07 AU (at a distance of 460 pc), an inclination of 55.6 ± 2.4°, a position angle of the system axis of 127.5 ± 24.5°, and a flux contribution of the over-resolved halo component to the total near-infrared excess of 16.8 ± 4.1%. The best-fit N-band model consists of an elongated Gaussian with a semi-major-axis HWHM of ~5 AU and an axis ratio of a/b ~ 3.4 (corresponding to an inclination of ~72°). With a parametric disk model, we fitted all near- and mid-infrared visibilities and the SED simultaneously. The model disk starts at an inner radius of 0.46 ± 0.06 AU with an inner rim temperature of 1498 ± 70 K. The disk is seen at a nearly edge-on inclination of 70 ± 5°. This supports theories that require high inclination angles to explain obscuration events in the line of sight to the observer, for example, in UX Ori objects where orbiting dust clouds in the disk or disk atmosphere can obscure the central star.
Based on observations made with ESO telescopes at Paranal Observatory under program IDs: 090.C-0769, 074.C-0552. 11. Enhanced Research Opportunity to Study the Atmospheric Forcing by High-Energy Particle Precipitation at High Latitudes: Emerging New Satellite Data and the new Ground-Based Observations in Northern Scandinavia, including the EISCAT_3D Incoherent Scatter Facility. Science.gov (United States) Turunen, E. S.; Ulich, T.; Kero, A.; Tero, R.; Verronen, P. T.; Norberg, J.; Miyoshi, Y.; Oyama, S. I.; Saito, S.; Hosokawa, K.; Ogawa, Y. 2017-12-01 Recent observational and model results on particle precipitation as a source of atmospheric variability challenge us to implement better, continuously monitoring observational infrastructure for middle and upper atmospheric research. An example is the effect of high-energy electron precipitation during pulsating aurora on mesospheric ozone, the concentration of which may be reduced by several tens of percent, similarly to what occurs during some solar proton events, which are known to occur more rarely than pulsating aurora. So far the Assessment Reports by the Intergovernmental Panel on Climate Change have not explicitly included the particle forcing of the middle and upper atmosphere in their climate model scenarios. This will appear for the first time in the upcoming climate simulations. We review recent results related to atmospheric forcing by particle precipitation via effects on chemical composition. We also show the research potential of new ground-based radio measurement techniques, such as spectral riometry and incoherent scatter by new phased-array radars, such as EISCAT_3D, which will be a volumetric, 3-dimensionally imaging radar, distributed in Norway, Sweden, and Finland. It is expected to be operational from 2020 onwards, surpassing all the current IS radars of the world in technology.
It will be able to produce continuous information on ionospheric plasma parameters in a volume, including 3D-vector plasma velocities. For the first time we will be able to map the 3D electric currents in the ionosphere, and we will have continuous vector wind measurements in the mesosphere. The geographical area covered by the EISCAT_3D measurements can be expanded by suitably selected other continuous observations, such as optical and satellite tomography networks. A new 100 Hz all-sky camera network was recently installed in Northern Scandinavia in order to support the Japanese Arase satellite mission. In the near future the ground-based measurement network will also include new 12. Functional Imaging of Hybrid Nanostructures. Visualization of Mechanisms for Solar Energy Utilization. Northwestern FG-02-07ER46401 Final Report Energy Technology Data Exchange (ETDEWEB) Lauhon, Lincoln J. [Northwestern Univ., Evanston, IL (United States) 2015-03-20 The report describes advances in understanding the interaction of light with hybrid nanostructured materials, and the influence of physical and electronic structure on the flow of excess energetic charge carriers, to support the design and optimization of new materials for photoelectrical and photoelectrochemical energy conversion. Raman scattering, multi-wavelength optical excitation, and numerical modeling are combined with electrical transport measurements on model hybrid materials structures and devices to resolve, in energy and space, the absorption of light, the generation of excess energetic charge carriers, and the efficiency of their separation to generate electrical and chemical energy.
Appropriate combinations of spatially-resolved, time-resolved, and spectrally-resolved measurements are used to isolate and quantify various steps in the energy conversion process, including geometrically and plasmonically enhanced absorption, the generation of carriers with excess energy, and the efficiency with which the carriers can move to and perform useful chemistry at interfaces. 13. The Ontario Energy Marketers Association Energy Technology Data Exchange (ETDEWEB) Baker, W.F.C. [Ontario Energy Marketers Association, ON (Canada) 1998-12-31 An overview of the role of the Ontario Energy Marketers Association (OEMA) and its future orientation was presented. Participants in the OEMA include agents, brokers, marketers, local distribution companies, public interest representatives, associations and government representatives. The role of the OEMA is to encourage open competition for the benefit and protection of all energy consumers and market participants. As well, the OEMA serves as a forum for key industry stakeholders to resolve market issues outside the regulatory arena, set standards and codes of practice, establish customer education programs, and develop industry input into public policy making. 14. The Resolved Stellar Populations Early Release Science Program Science.gov (United States) Weisz, Daniel; Anderson, J.; Boyer, M.; Cole, A.; Dolphin, A.; Geha, M.; Kalirai, J.; Kallivayalil, N.; McQuinn, K.; Sandstrom, K.; Williams, B. 2017-11-01 We propose to obtain deep multi-band NIRCam and NIRISS imaging of three resolved stellar systems within 1 Mpc (NOI 104). We will use this broad science program to optimize observational setups and to develop data reduction techniques that will be common to JWST studies of resolved stellar populations. We will combine our expertise in HST resolved star studies with these observations to design, test, and release point spread function (PSF) fitting software specific to JWST.
PSF photometry is at the heart of resolved stellar populations studies, but is not part of the standard JWST reduction pipeline. Our program will establish JWST-optimized methodologies in six scientific areas: star formation histories, measurement of the sub-Solar mass stellar IMF, extinction maps, evolved stars, proper motions, and globular clusters, all of which will be common pursuits for JWST in the local Universe. Our observations of globular cluster M92, ultra-faint dwarf Draco II, and star-forming dwarf WLM will be of high archival value for other science such as calibrating stellar evolution models, measuring properties of variable stars, and searching for metal-poor stars. We will release the results of our program, including PSF fitting software, matched HST and JWST catalogs, clear documentation, and step-by-step tutorials (e.g., Jupyter notebooks) for data reduction and science application, to the community prior to the Cycle 2 Call for Proposals. We will host a workshop to help community members plan their Cycle 2 observations of resolved stars. Our program will provide blueprints for the community to efficiently reduce and analyze JWST observations of resolved stellar populations. 15. Demonstration of Resolving Urban Problems by Applying Smart Technology. Science.gov (United States) Kim, Y. 2016-12-01 Recently, movements to seek alternatives for resolving urban problems related to energy, water, greenhouse gases, and disasters by utilizing smart technology systems are becoming more active around the world. The purpose of this study is to evaluate service verification in a demonstration region where smart technology has actually been applied, in order to raise the efficiency of the services and explore solutions for urban problems. This process is essential for resolving urban problems in the future and for establishing an 'integration platform' for sustainable development.
The demonstration region selected in this study for service verification is Busan in Korea. Busan adopted 16 services in 4 sections last year and began demonstrations to improve quality of life and resolve urban environment problems. In addition, Busan participated officially in the Global City Teams Challenge (GCTC) held by the National Institute of Standards and Technology (NIST) in the USA last year, and can be regarded as a representative demonstration region in Korea. The survey revealed the following practical difficulties in the demonstration of resolving urban problems by applying smart technology. First, participation in the demonstration was low because citizens were either unaware of it or did not recognize it. Second, demonstrating many services at low cost reduced the effect of each service demonstration. Third, as functions became fused, the responsible management departments, the application criteria for the technology, and its processes were found to be ambiguous. In order to increase the efficiency of the demonstration for the rest of the period, the results of this study indicate that it is necessary to identify what citizens actually demand in order to raise public participation. In addition, effort needs to focus on the services most worth demonstrating rather than on a large variety of service demonstrations. Lastly, it is necessary to build an integration platform through cooperation 16. Introduction to theory and analysis of resolved (and unresolved) neutron resonances via SAMMY International Nuclear Information System (INIS) Larson, N.M. 1998-07-01 Neutron cross-section data are important for two distinct purposes: first, they provide insight into the nature of matter, thus assisting in the understanding of fundamental physics; second, they are needed for practical applications (e.g., for calculating when and how a reactor will become critical, or how much shielding is needed for storage of nuclear materials, and for medical applications).
Neutron cross section data in the resolved-resonance region are generally obtained by time-of-flight experiments, which must be carefully analyzed if they are to be properly understood and utilized. In this paper, important features of the analysis process are discussed, with emphasis on the particular technique used in the analysis code SAMMY. Other features of the code are also described; these include such topics as calculation of group cross sections (including covariance matrices), generation and fitting of integral quantities, and extensions into the unresolved-resonance region and higher-energy regions. 17. Introduction to the theory and analysis of resolved (and unresolved) neutron resonances via SAMMY Energy Technology Data Exchange (ETDEWEB) Larson, N.M. 1998-02-01 Neutron cross-section data are important for two distinct purposes: First, they provide insight into the nature of matter, thus assisting in the understanding of fundamental physics. Second, they are needed for practical applications (e.g., for calculating when and how a reactor will become critical, or how much shielding is needed for storage of nuclear materials, and for medical applications). Neutron cross section data in the resolved-resonance region are generally obtained by time-of-flight experiments, which must be carefully analyzed if they are to be properly understood and utilized. In this paper, important features of the analysis process are discussed, with emphasis on the particular techniques used in the analysis code SAMMY. Other features of the code are also described; these include such topics as calculation of group cross sections (including covariance matrices), generation and fitting of integral quantities, and extensions into the unresolved-resonance region and higher energy regions. 18. Introduction to the Theory and Analysis of Resolved (and Unresolved) Neutron Resonances via SAMMY Energy Technology Data Exchange (ETDEWEB) Larson, N.
2000-03-13 Neutron cross-section data are important for two purposes: First, they provide insight into the nature of matter, increasing our understanding of fundamental physics. Second, they are needed for practical applications (e.g., for calculating when and how a reactor will become critical, or how much shielding is needed for storage of nuclear materials, or for medical applications). Neutron cross section data in the resolved-resonance region are generally obtained by time-of-flight experiments, which must be carefully analyzed if they are to be properly understood and utilized. In this report, important features of the analysis process are discussed, with emphasis on the particular techniques used in the analysis code SAMMY. Other features of the code are also described; these include such topics as calculation of group cross sections (including covariance matrices), generation and fitting of integral quantities, and extensions into the unresolved-resonance region and higher-energy regions. 19. Introduction to theory and analysis of resolved (and unresolved) neutron resonances via SAMMY Energy Technology Data Exchange (ETDEWEB) Larson, N.M. 1998-07-01 Neutron cross-section data are important for two distinct purposes: first, they provide insight into the nature of matter, thus assisting in the understanding of fundamental physics; second, they are needed for practical applications (e.g., for calculating when and how a reactor will become critical, or how much shielding is needed for storage of nuclear materials, and for medical applications). Neutron cross section data in the resolved-resonance region are generally obtained by time-of-flight experiments, which must be carefully analyzed if they are to be properly understood and utilized. In this paper, important features of the analysis process are discussed, with emphasis on the particular technique used in the analysis code SAMMY. 
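As background to the time-of-flight technique these abstracts rely on: non-relativistically, the neutron energy follows directly from the flight path and arrival time, E = ½m(L/t)². A minimal sketch (the 10 m path and 100 µs timing are illustrative values, not from any particular facility):

```python
NEUTRON_MASS_KG = 1.674927e-27   # CODATA neutron mass
JOULES_PER_EV = 1.602177e-19

def tof_to_energy_ev(path_m, time_s):
    """E = (1/2) m v^2 with v = L / t; valid well below relativistic speeds."""
    v = path_m / time_s
    return 0.5 * NEUTRON_MASS_KG * v**2 / JOULES_PER_EV

# A neutron covering a 10 m flight path in 100 microseconds:
energy = tof_to_energy_ev(10.0, 100e-6)  # ~52 eV, in the resolved-resonance range
```

Inverting this relation is why timing resolution and flight-path length set the energy resolution of a resonance measurement.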
Other features of the code are also described; these include such topics as calculation of group cross sections (including covariance matrices), generation and fitting of integral quantities, and extensions into the unresolved-resonance region and higher-energy regions. 20. Angle-resolved photoemission spectra of graphene from first-principles calculations. Science.gov (United States) Park, Cheol-Hwan; Giustino, Feliciano; Spataru, Catalin D; Cohen, Marvin L; Louie, Steven G 2009-12-01 Angle-resolved photoemission spectroscopy (ARPES) is a powerful experimental technique for directly probing electron dynamics in solids. The energy versus momentum dispersion relations and the associated spectral broadenings measured by ARPES provide a wealth of information on quantum many-body interaction effects. In particular, ARPES allows studies of the Coulomb interaction among electrons (electron-electron interactions) and the interaction between electrons and lattice vibrations (electron-phonon interactions). Here, we report ab initio simulations of the ARPES spectra of graphene including both electron-electron and electron-phonon interactions on the same footing. Our calculations reproduce some of the key experimental observations related to many-body effects, including the indication of a mismatch between the upper and lower halves of the Dirac cone. 1. Recent trends in spin-resolved photoelectron spectroscopy Science.gov (United States) Okuda, Taichi 2017-12-01 Since the discovery of the Rashba effect on crystal surfaces and also the discovery of topological insulators, spin- and angle-resolved photoelectron spectroscopy (SARPES) has become more and more important, as the technique can measure directly the electronic band structure of materials with spin resolution. 
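The dispersion relations that ARPES and SARPES measure are obtained by converting photoelectron kinetic energy and emission angle into in-plane crystal momentum via the standard free-electron final-state kinematics, ħk∥ = √(2mₑE_kin)·sinθ. A minimal sketch (the kinetic energy and angle below are illustrative values):

```python
import math

M_E = 9.109384e-31      # electron mass, kg
HBAR = 1.054572e-34     # reduced Planck constant, J*s
J_PER_EV = 1.602177e-19

def k_parallel_inv_angstrom(e_kin_ev, theta_deg):
    """In-plane photoelectron momentum (1/Angstrom), conserved across the surface."""
    k = math.sqrt(2 * M_E * e_kin_ev * J_PER_EV) / HBAR * math.sin(math.radians(theta_deg))
    return k * 1e-10  # convert 1/m -> 1/Angstrom

# 20 eV kinetic energy, emitted 30 degrees off the surface normal:
k = k_parallel_inv_angstrom(20.0, 30.0)  # ~1.15 1/Angstrom
```

Scanning the analyzer angle therefore maps out E(k∥) directly, which is how the band dispersions and spectral broadenings discussed above are extracted.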
In the same way that the discovery of high-Tc superconductors promoted the development of high-resolution angle-resolved photoelectron spectroscopy, the discovery of this new class of materials has stimulated the development of new SARPES apparatus with new functions and higher resolution, such as spin vector analysis, ten times higher energy and angular resolution than conventional SARPES, multichannel spin detection, and so on. In addition, the utilization of vacuum ultraviolet lasers also opens a pathway to the realization of novel SARPES measurements. In this review, such recent trends in SARPES techniques and measurements will be overviewed. 2. Spatially and time resolved kinetics of indirect magnetoexcitons Science.gov (United States) Hasling, Matthew; Dorow, Chelsey; Calman, Erica; Butov, Leonid; Wilkes, Joe; Campman, Kenneth; Gossard, Arthur The small exciton mass and binding energy give the opportunity to realize the high magnetic field regime for excitons in magnetic fields of a few tesla, achievable in the lab. Long lifetimes of indirect excitons give the opportunity to study kinetics of magnetoexciton transport by time-resolved optical imaging of exciton emission. We present spatially and time resolved measurements showing the effect of increased magnetic field on transport of magnetoexcitons. We observe that increased magnetic field leads to slowing down of magnetoexciton transport. Supported by NSF Grant No. 1407277. J.W. was supported by the EPSRC (Grant EP/L022990/1). C.J.D. was supported by the NSF Graduate Research Fellowship Program under Grant No. DGE-1144086. 3. Energy storage International Nuclear Information System (INIS) Anon. 1992-01-01 This chapter discusses the role that energy storage may have in the energy future of the US.
The topics discussed in the chapter include historical aspects of energy storage, thermal energy storage including sensible heat storage, latent heat storage, thermochemical heat storage, and seasonal heat storage, electricity storage including batteries, pumped hydroelectric storage, compressed air energy storage, and superconducting magnetic energy storage, and production and combustion of hydrogen as an energy storage option. 4. Differential resolvents of minimal order and weight Directory of Open Access Journals (Sweden) John Michael Nahay 2004-01-01 Full Text Available We will determine the number of powers of α that appear with nonzero coefficient in an α-power linear differential resolvent of smallest possible order of a univariate polynomial P(t) whose coefficients lie in an ordinary differential field and whose distinct roots are differentially independent over constants. We will then give an upper bound on the weight of an α-resolvent of smallest possible weight. We will then compute the indicial equation, apparent singularities, and Wronskian of the Cockle α-resolvent of a trinomial and finish with a related determinantal formula. 5. Time resolved spectrometry on the CLIC Test Facility 3 CERN Document Server Lefèvre, T; Braun, H H; Bravin, E; Burger, S; Corsini, R; Döbert, Steffen; Dutriat, C; Tecker, F A; Urschütz, Peter; Welsch, C P 2006-01-01 The high charge (>6 μC) electron beam produced in the CLIC Test Facility 3 (CTF3) is accelerated in fully beam-loaded cavities. To be able to measure the resulting strong transient effects, the time evolution of the beam energy and its energy spread must be determined with at least 50 MHz bandwidth. Three spectrometer lines are installed along the linac in order to control and tune the beam. The electrons are deflected by dipole magnets onto Optical Transition Radiation (OTR) screens which are observed by CCD cameras. The measured horizontal beam size is then directly related to the energy spread.
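The stated relation between horizontal beam size and energy spread follows from the dispersion of the spectrometer dipole: particles arrive at x = D·δ, so σ_E/E ≈ σ_x/D when the betatron contribution to the spot size is negligible. A minimal sketch (the dispersion and spot-size values are illustrative assumptions, not CTF3 optics parameters):

```python
def relative_energy_spread(sigma_x_m, dispersion_m):
    """In a spectrometer line x = D * dE/E, so sigma_E/E ~ sigma_x / D
    (neglecting the betatron contribution to the measured spot size)."""
    return sigma_x_m / dispersion_m

# A 2 mm RMS spot on the OTR screen behind 0.5 m of dispersion:
spread = relative_energy_spread(2e-3, 0.5)  # 0.004, i.e. 0.4% energy spread
```

Time-resolving the same screen image, as the abstract goes on to describe, turns this single number into an energy-spread history along the pulse.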
In order to provide time-resolved energy spectra, a fraction of the OTR photons is sent onto a multi-channel photomultiplier. The overall setup is described; special focus is given to the design of the OTR screen with its synchrotron radiation shielding. The performance of the time-resolved measurements is discussed in detail. Finally, the limitations of the system, mainly due to radiation problems, are discussed. 6. Time-resolved absorption measurements on OMEGA International Nuclear Information System (INIS) Jaanimagi, P.A.; DaSilva, L.; Delettrez, J.; Gregory, G.G.; Richardson, M.C. 1986-01-01 Time-resolved measurements of the incident laser light that is scattered and/or refracted from targets irradiated by the 24 UV-beam OMEGA laser at LLE have provided some interesting features related to time-resolved absorption. The decrease in laser absorption characteristic of irradiating a target that implodes during the laser pulse has been observed. The increase in absorption expected as the critical density surface moves from a low-Z to a high-Z material in the target has also been noted. The detailed interpretation of these results is made through comparisons with simulation using the code LILAC, as well as with streak data from time-resolved x-ray imaging and spectroscopy. In addition, time- and space-resolved imaging of the scattered light yields information on laser irradiation uniformity conditions on the target. The report consists of viewgraphs. 7. Component resolved testing for allergic sensitization DEFF Research Database (Denmark) Skamstrup Hansen, Kirsten; Poulsen, Lars K 2010-01-01 Component resolved diagnostics introduces new possibilities regarding diagnosis of allergic diseases and individualized, allergen-specific treatment. Furthermore, refinement of IgE-based testing may help elucidate the correlation or lack of correlation between allergenic sensitization and allergi... 8.
Resolving Inconsistencies in de Broglie's Relation Directory of Open Access Journals (Sweden) Wagener P. 2010-01-01 Full Text Available Modern quantum theory is based on de Broglie's relation between momentum and wavelength. In this article we investigate certain inconsistencies in its formulation and propose a reformulation to resolve them. 9. Time-Resolved Fluorescence in Photodynamic Therapy Directory of Open Access Journals (Sweden) Shu-Chi Allison Yeh 2014-12-01 Full Text Available Photodynamic therapy (PDT) has been used clinically for treating various diseases including malignant tumors. The main advantages of PDT over traditional cancer treatments are attributed to the localized effects of the photochemical reactions by selective illumination, which then generate reactive oxygen species and singlet oxygen molecules that lead to cell death. To date, over- or under-treatment still remains one of the major challenges in PDT due to the lack of robust real-time dose monitoring techniques. Time-resolved fluorescence (TRF) provides fluorescence lifetime profiles of the targeted fluorophores. It has been demonstrated that TRF offers supplementary information in drug-molecular interactions and cell responses compared to steady-state intensity acquisition. Moreover, fluorescence lifetime itself is independent of the light path; thus it overcomes artifacts caused by diffuse light propagation and detection geometries. TRF in PDT is an emerging approach, and relevant studies to date are scattered. Therefore, this review mainly focuses on summarizing up-to-date TRF studies in PDT, and the effects of PDT dosimetric factors on the measured TRF parameters. From there, potential gaps for clinical translation are also discussed. 10.
Vibrational excitation and vibrationally resolved electronic excitation cross sections of positron-H2 scattering Science.gov (United States) Zammit, Mark; Fursa, Dmitry; Savage, Jeremy; Bray, Igor 2016-09-01 Vibrational excitation and vibrationally resolved electronic excitation cross sections of positron-H2 scattering have been calculated using the single-centre molecular convergent close-coupling (CCC) method. The adiabatic-nuclei approximation was utilized to model the above scattering processes and obtain the vibrationally resolved positron-H2 scattering length. As previously demonstrated, the CCC results are converged and accurately account for virtual and physical positronium formation by coupling basis functions with large orbital angular momentum. Here vibrationally resolved integrated and differential cross sections are presented over a wide energy range and compared with previous calculations and available experiments. Los Alamos National Laboratory and Curtin University. 11. Time-resolved photoelectron nano-spectroscopy of individual silver particles: Perspectives and limitations DEFF Research Database (Denmark) Rohmer, Martin; Bauer, Michael; Leissner, Till 2010-01-01 Simultaneous time- and energy-resolved two-photon photoemission with nanometer resolution is demonstrated for the first time. We monitor the energy dependence of the decay dynamics of electron excitations in individual silver particles, which were deposited from a gas aggregation cluster source... 12. Wind energy International Nuclear Information System (INIS) Anon. 1992-01-01 This chapter discusses the role wind energy may have in the energy future of the US. The topics discussed in the chapter include historical aspects of wind energy use, the wind energy resource, wind energy technology including intermediate-size and small wind turbines and intermittency of wind power, public attitudes toward wind power, and environmental, siting and land use issues 13. PHOTON09. 
Proceedings of the international conference on the structure and interactions of the photon including the 18th international workshop on photon-photon collisions and the international workshop on high energy photon linear colliders Energy Technology Data Exchange (ETDEWEB) Behnke, Olaf; Diehl, Markus; Schoerner-Sadenius, Thomas; Steinbrueck, Georg (eds.) 2010-01-15 The following topics were dealt with: Electroweak and new physics, photon-collider technology, low-energy photon experiments, prompt photons, photon structure, jets and heavy flavours, vacuum polarization and light-by-light scattering, small-x processes, diffraction, total cross sections, exclusive channels and resonances, photons in astroparticle physics. (HSI) 14. PHOTON09. Proceedings of the international conference on the structure and interactions of the photon including the 18th international workshop on photon-photon collisions and the international workshop on high energy photon linear colliders International Nuclear Information System (INIS) Behnke, Olaf; Diehl, Markus; Schoerner-Sadenius, Thomas; Steinbrueck, Georg 2010-01-01 The following topics were dealt with: Electroweak and new physics, photon-collider technology, low-energy photon experiments, prompt photons, photon structure, jets and heavy flavours, vacuum polarization and light-by-light scattering, small-x processes, diffraction, total cross sections, exclusive channels and resonances, photons in astroparticle physics. (HSI) 15. Energy Technology. Science.gov (United States) Eaton, William W. Reviewed are technological problems faced in energy production including locating, recovering, developing, storing, and distributing energy in clean, convenient, economical, and environmentally satisfactory manners. The energy resources of coal, oil, natural gas, hydroelectric power, nuclear energy, solar energy, geothermal energy, winds, tides,… 16. 
In-pile Thermal Conductivity Characterization with Time Resolved Raman Energy Technology Data Exchange (ETDEWEB) Wang, Xinwei 2018-03-19 Executive Summary The project is designed to achieve three objectives: (1) Develop a novel time resolved Raman technology for direct measurement of fuel and cladding thermal conductivity. (2) Validate and improve the technology development by measuring ceramic materials germane to the nuclear industry. (3) Conduct instrumentation development to integrate optical fiber into our sensing system for eventual in-pile measurement. We have developed three new techniques: time-domain differential Raman (TD-Raman), frequency-resolved Raman (FR-Raman), and energy transport state-resolved Raman (ET-Raman). The TD-Raman varies the laser heating time and does simultaneous Raman thermal probing; the FR-Raman probes the material's thermal response under periodic laser heating at different frequencies; and the ET-Raman probes the thermal response under steady and pulsed laser heating. The measurement capacity of these techniques has been fully assessed and verified by measuring micro/nanoscale materials. None of these techniques needs data on laser absorption or absolute material temperature rise, yet they are still able to measure the thermal conductivity and thermal diffusivity with unprecedented accuracy. It is expected that they will have broad applications for in-pile thermal characterization of nuclear materials based on pure optical heating and sensing. 17. A new scintillation counter with very fast resolving time (1961) International Nuclear Information System (INIS) Koch, L. 1961-01-01 The rare gases used as scintillators are characterized by their short time of luminescence and by the linearity of their response as a function of the total energy imparted to the gas by the incident particle.
It is possible with these scintillators, when associated with a fast-response photomultiplier, to solve certain problems of nuclear physics demanding a linear detector with a very fast resolving time (a few nanoseconds). Two examples of the construction of this apparatus are described. The results obtained and future possibilities are briefly outlined. (author) [fr] 18. Magneto-Optical and Time Resolved Spectroscopy in Narrow Gap MOVPE Grown Ferromagnetic Semiconductors Science.gov (United States) Meeker, M.; Magill, B.; Bhowmick, M.; Khodaparast, G. A.; McGill, S.; Feeser, C.; Wessels, B. W.; Saha, D.; Sanders, G. D.; Stanton, C. J. 2014-03-01 We report on magneto-optical studies at high magnetic fields and time-resolved studies that provide insight into the band structure, time scales, and the nature of the interactions in ferromagnetic InMnAs and InMnSb grown by MOVPE. By probing the dynamical behavior of the nonequilibrium carriers and spins, created by intense laser pulses, we gain valuable information about different scattering mechanisms and observe the sensitivity and tunability of the carrier and spin dynamics to the initial excitation energy. Theoretical calculations are performed using an 8-band k·p model including non-parabolicity, band-mixing, and the interaction of magnetic Mn impurities with itinerant electrons and holes. Supported by: NSF-Career Award DMR-0846834, NSF-DMR-1305666, NSF-DMR-1105437, and Virginia Tech Institute for Critical Technology and Applied Sciences (ICTAS). 19.
Nanometer-resolved chemical analyses of femtosecond laser-induced periodic surface structures on titanium Science.gov (United States) Kirner, Sabrina V.; Wirth, Thomas; Sturm, Heinz; Krüger, Jörg; Bonse, Jörn 2017-09-01 The chemical characteristics of two different types of laser-induced periodic surface structures (LIPSS), so-called high and low spatial frequency LIPSS (HSFL and LSFL), formed upon irradiation of titanium surfaces by multiple femtosecond laser pulses in air (30 fs, 790 nm, 1 kHz), are analyzed by various optical and electron-beam-based surface analytical techniques, including micro-Raman spectroscopy, energy dispersive X-ray analysis, X-ray photoelectron spectroscopy, and Auger electron spectroscopy. The latter method was employed in a high-resolution mode capable of spatially resolving even the smallest HSFL structures featuring spatial periods below 100 nm. In combination with an ion sputtering technique, depth-resolved chemical information on superficial oxidation processes was obtained, revealing characteristic differences between the two different types of LIPSS. Our results indicate that HSFL only a few tens of nanometers deep are formed on top of a ˜150 nm thick graded superficial oxide layer without sharp interfaces, consisting of amorphous TiO2 and partially crystallized Ti2O3. The larger LSFL structures with periods close to the irradiation wavelength originate from the laser interaction with metallic titanium. They are covered by a ˜200 nm thick amorphous oxide layer, which consists mainly of TiO2 (at the surface) and other titanium oxide species of lower oxidation states underneath. 20. ESCo for mutual benefit and free energy saving. White paper 1. Including five cases and tips from experts; ESCo voor wederzijds voordeel en gratis energiebesparing. White paper 1.
Inclusief vijf cases en experttips Energy Technology Data Exchange (ETDEWEB) NONE 2013-01-15 This white paper provides insight into the operation, options and restrictions of ESCo's (Energy Service Companies). The different variants, from a relatively simple ESCo product to an advanced ESCo project, are described and illustrated with examples from practice. Tips from experts can help in assessing whether entering into a partnership with an ESCo is attractive [Dutch] Deze whitepaper geeft inzicht in de werking, mogelijkheden en beperkingen van ESCo's (Energy Service Companies). De verschillende varianten, van een relatief eenvoudige product-ESCo tot een geavanceerde project-ESCo worden beschreven en geillustreerd aan de hand van praktijkvoorbeelden. Tips van expert helpen met de inschatting of het aangaan van een samenwerkingsverband met een ESCo aantrekkelijk is. 1. New Instruments for Spectrally-Resolved Solar Soft X-ray Observations from CubeSats, and Larger Missions Science.gov (United States) Caspi, A.; Shih, A.; Warren, H. P.; DeForest, C. E.; Woods, T. N. 2015-12-01 Solar soft X-ray (SXR) observations provide important diagnostics of plasma heating during solar flares and quiescent times. Spectrally- and temporally-resolved measurements are crucial for understanding the dynamics and evolution of these energetic processes; spatially-resolved measurements are critical for understanding energy transport. A better understanding of the thermal plasma informs our interpretation of hard X-ray (HXR) observations of nonthermal particles, improving our understanding of the relationships between particle acceleration, plasma heating, and the underlying release of magnetic energy during reconnection. We introduce a new proposed mission, the CubeSat Imaging X-ray Solar Spectrometer (CubIXSS), to measure spectrally- and spatially-resolved SXRs from the quiescent and flaring Sun from a 6U CubeSat platform in low-Earth orbit during a nominal 1-year mission.
CubIXSS includes the Amptek X123-SDD silicon drift detector, a low-noise, commercial off-the-shelf (COTS) instrument enabling solar SXR spectroscopy from ~0.5 to ~30 keV with ~0.15 keV FWHM spectral resolution with low power, mass, and volume requirements. An X123-CdTe cadmium-telluride detector is also included for ~5-100 keV HXR spectroscopy with ~0.5-1 keV FWHM resolution. CubIXSS also includes a novel spectro-spatial imager -- the first ever solar imager on a CubeSat -- utilizing a pinhole aperture and X-ray transmission diffraction grating to provide full-Sun imaging from ~0.1 to ~10 keV, with ~25 arcsec and ~0.1 Å FWHM spatial and spectral resolutions, respectively. We discuss scaled versions of these instruments, with greater sensitivity and dynamic range, and significantly improved spectral and spatial resolutions for the imager, for deployment on larger platforms such as Small Explorer missions. 2. High-Energy Compton Scattering Light Sources CERN Document Server Hartemann, Fred V; Barty, C; Crane, John; Gibson, David J; Hartouni, E P; Tremaine, Aaron M 2005-01-01 No monochromatic, high-brightness, tunable light sources currently exist above 100 keV. Important applications that would benefit from such new hard x-ray sources include: nuclear resonance fluorescence spectroscopy, time-resolved positron annihilation spectroscopy, and MeV flash radiography. The peak brightness of Compton scattering light sources is derived for head-on collisions and found to scale with the electron beam brightness and the drive laser pulse energy. This gamma 2 3. Global geothermal energy scenario International Nuclear Information System (INIS) Singh, S.K.; Singh, A.; Pandey, G.N. 1993-01-01 To resolve the energy crisis, efforts have been made over the last few decades to explore and utilize nonconventional energy resources. Geothermal energy is one such energy resource. Fossil fuels are the earth's energy capital, like money deposited in a bank years ago.
The energy that built this capital came mainly from the sun. Steam geysers and hot water springs are other manifestations of geothermal energy. Most of the 17 countries that today harness geothermal energy have simply tapped such resources where they occur. (author). 8 refs., 4 tabs., 1 fig. 4. The conforming brain and deontological resolve. Science.gov (United States) Pincus, Melanie; LaViers, Lisa; Prietula, Michael J; Berns, Gregory 2014-01-01 Our personal values are subject to forces of social influence. Deontological resolve captures how strongly one relies on absolute rules of right and wrong in the representation of one's personal values and may predict willingness to modify one's values in the presence of social influence. Using fMRI, we found that a neurobiological metric for deontological resolve based on relative activity in the ventrolateral prefrontal cortex (VLPFC) during the passive processing of sacred values predicted individual differences in conformity. Individuals with stronger deontological resolve, as measured by greater VLPFC activity, displayed lower levels of conformity. We also tested whether responsiveness to social reward, as measured by ventral striatal activity during social feedback, predicted variability in conformist behavior across individuals but found no significant relationship. From these results we conclude that unwillingness to conform to others' values is associated with a strong neurobiological representation of social rules. 5. The conforming brain and deontological resolve. Directory of Open Access Journals (Sweden) Melanie Pincus Full Text Available Our personal values are subject to forces of social influence. Deontological resolve captures how strongly one relies on absolute rules of right and wrong in the representation of one's personal values and may predict willingness to modify one's values in the presence of social influence.
Using fMRI, we found that a neurobiological metric for deontological resolve based on relative activity in the ventrolateral prefrontal cortex (VLPFC) during the passive processing of sacred values predicted individual differences in conformity. Individuals with stronger deontological resolve, as measured by greater VLPFC activity, displayed lower levels of conformity. We also tested whether responsiveness to social reward, as measured by ventral striatal activity during social feedback, predicted variability in conformist behavior across individuals but found no significant relationship. From these results we conclude that unwillingness to conform to others' values is associated with a strong neurobiological representation of social rules. 6. Spectrally resolved longitudinal spatial coherence interferometry Science.gov (United States) Woodard, Ethan R.; Kudenov, Michael W. 2017-05-01 We present an alternative imaging technique using spectrally resolved longitudinal spatial coherence interferometry to encode a scene's angular information onto the source's power spectrum. Fourier transformation of the spectrally resolved channeled spectrum output yields a measurement of the incident scene's angular spectrum. Theory for the spectrally resolved interferometric technique is detailed, demonstrating analogies to conventional Fourier transform spectroscopy. An experimental proof-of-concept system and results are presented using an angularly-dependent Fabry-Perot interferometer-based optical design for successful reconstruction of one-dimensional sinusoidal angular spectra. Discussion of a potential future application of the technique, in which polarization information is encoded onto the source's power spectrum, is also given. 7. Depth-resolved fluorescence of biological tissue Science.gov (United States) Wu, Yicong; Xi, Peng; Cheung, Tak-Hong; Yim, So Fan; Yu, Mei-Yung; Qu, Jianan Y.
2005-06-01 The depth-resolved autofluorescence of rabbit oral tissue and of normal and dysplastic human ectocervical tissue within 120 μm depth was investigated using confocal fluorescence spectroscopy with excitations at 355 nm and 457 nm. From the topmost keratinizing layer of oral and ectocervical tissue, strong keratin fluorescence with spectral characteristics similar to collagen was observed. The fluorescence signal from epithelial tissue between the keratinizing layer and stroma can be well resolved. Furthermore, NADH and FAD fluorescence measured from the underlying non-keratinizing epithelial layer was strongly correlated to the tissue pathology. This study demonstrates that depth-resolved fluorescence spectroscopy can reveal fine structural information on epithelial tissue and potentially provide more accurate diagnostic information for determining tissue pathology. 8. Imposing resolved turbulence in CFD simulations DEFF Research Database (Denmark) Gilling, L.; Sørensen, Niels N. 2011-01-01 In large‐eddy simulations, the inflow velocity field should contain resolved turbulence. This paper describes and analyzes two methods for imposing resolved turbulence in the interior of the domain in Computational Fluid Dynamics simulations. The intended application of the methods is to impose...... resolved turbulence immediately upstream of the region or structure of interest. Compared to the alternative of imposing the turbulence at the inlet, there is a large potential to reduce the computational cost of the simulation by reducing the total number of cells. The reduction comes from a lower demand...... of modifying the source terms. Neither of the two methods can impose synthetic turbulence with good results, but it is shown that by running the turbulence field through a short precursor simulation, very good results are obtained. Copyright © 2011 John Wiley & Sons, Ltd.... 9.
Energies; Energies Energy Technology Data Exchange (ETDEWEB) NONE 2003-07-01 In the framework of the National Debate on energies in the context of sustainable development, several environmental associations organized a debate on the merits of nuclear power compared with renewable energies. The first part presents nuclear energy as a possible solution to fight the greenhouse effect, together with the associated problem of waste management. The second part gives information on solar energy and the possibilities of heat and electric power production. A presentation by the FEE (French wind power association) on the status and development of wind power in France is also provided. (A.L.B.) 10. Resolving Ethical Dilemmas in Financial Audit OpenAIRE Professor PhD Turlea Eugeniu; PhD Student Mocanu Mihaela 2010-01-01 Resolving ethical dilemmas is a difficult endeavor in any field, and financial auditing is no exception. Ethical dilemmas are complex situations which derive from a conflict and in which a decision among several alternatives is needed. Ethical dilemmas are common in the work of the financial auditor, whose mission is to serve the interests of the public at large, not those of the auditee's managers who mandate him/her. The objective of the present paper is to offer support in resolving ethi... 11. De novo assembly of a haplotype-resolved human genome.
Science.gov (United States) Cao, Hongzhi; Wu, Honglong; Luo, Ruibang; Huang, Shujia; Sun, Yuhui; Tong, Xin; Xie, Yinlong; Liu, Binghang; Yang, Hailong; Zheng, Hancheng; Li, Jian; Li, Bo; Wang, Yu; Yang, Fang; Sun, Peng; Liu, Siyang; Gao, Peng; Huang, Haodong; Sun, Jing; Chen, Dan; He, Guangzhu; Huang, Weihua; Huang, Zheng; Li, Yue; Tellier, Laurent C A M; Liu, Xiao; Feng, Qiang; Xu, Xun; Zhang, Xiuqing; Bolund, Lars; Krogh, Anders; Kristiansen, Karsten; Drmanac, Radoje; Drmanac, Snezana; Nielsen, Rasmus; Li, Songgang; Wang, Jian; Yang, Huanming; Li, Yingrui; Wong, Gane Ka-Shu; Wang, Jun 2015-06-01 The human genome is diploid, and knowledge of the variants on each chromosome is important for the interpretation of genomic information. Here we report the assembly of a haplotype-resolved diploid genome without using a reference genome. Our pipeline relies on fosmid pooling together with whole-genome shotgun strategies, based solely on next-generation sequencing and hierarchical assembly methods. We applied our sequencing method to the genome of an Asian individual and generated a 5.15-Gb assembled genome with a haplotype N50 of 484 kb. Our analysis identified previously undetected indels and 7.49 Mb of novel coding sequences that could not be aligned to the human reference genome, which include at least six predicted genes. This haplotype-resolved genome represents the most complete de novo human genome assembly to date. Application of our approach to identify individual haplotype differences should aid in translating genotypes to phenotypes for the development of personalized medicine. 12. De novo assembly of a haplotype-resolved human genome DEFF Research Database (Denmark) Cao, Hongzhi; Wu, Honglong; Luo, Ruibang 2015-01-01 The human genome is diploid, and knowledge of the variants on each chromosome is important for the interpretation of genomic information. Here we report the assembly of a haplotype-resolved diploid genome without using a reference genome. 
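As a side note on the "haplotype N50 of 484 kb" metric quoted in this record: N50 is the largest length L such that contigs (or phased blocks) of length at least L together cover half the total assembly. A minimal sketch with illustrative block lengths (not the paper's data):

```python
def n50(lengths):
    """N50: largest length L such that blocks >= L cover >= 50% of the total."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:  # cumulative coverage has reached half
            return length
    return 0

# Illustrative: total = 100, half = 50; cumulative 40, then 70 -> N50 = 30
print(n50([40, 30, 20, 10]))  # -> 30
```

The same definition underlies contig N50, scaffold N50, and the haplotype-block N50 reported above; only the set of lengths fed in differs.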
Our pipeline relies on fosmid pooling together with whole-genome...... of novel coding sequences that could not be aligned to the human reference genome, which include at least six predicted genes. This haplotype-resolved genome represents the most complete de novo human genome assembly to date. Application of our approach to identify individual haplotype differences should...... shotgun strategies, based solely on next-generation sequencing and hierarchical assembly methods. We applied our sequencing method to the genome of an Asian individual and generated a 5.15-Gb assembled genome with a haplotype N50 of 484 kb. Our analysis identified previously undetected indels and 7.49 Mb... 13. Suggested technical scheme to help resolve regulatory issues Energy Technology Data Exchange (ETDEWEB) Harvey, T. 1978-07-01 A management-planning model envisioned as a useful tool for planning and guiding the development of a nuclear waste repository data base is described. It incorporates the technical assessment goals and objectives of the US Nuclear Regulatory Commission, and it provides a strategy for reaching them. The model strategy includes provisions for the breadth, timeliness, and defensibility of its predictions. Consideration is given to observational data, its structure, and future refinements. The structure of the data is consistent with the needs of a systems model whose structure is proposed to resolve questions about repository safety. Uncertainties are categorized as an aid in defining and resolving technical issues. The model provides a framework for ultimately exposing all the sensitive and controversial factors. Some quantitative aspects of data acquisition are presented. 12 figures. 14. Innovative nuclear energy systems roadmap International Nuclear Information System (INIS) 2007-12-01 Developing nuclear energy that is sustainable, safe, has little waste by-product, and cannot be proliferated is an extremely vital and pressing issue. 
To resolve the four issues through free thinking and an overall vision, research activities on 'innovative nuclear energy systems' and 'innovative separation and transmutation' started as a unique 21st Century COE Program for nuclear energy called the Innovative Nuclear Energy Systems for Sustainable Development of the World, COE-INES. 'Innovative nuclear energy systems' include research on CANDLE burn-up reactors, lead-cooled fast reactors and the use of nuclear energy as heat energy. 'Innovative separation and transmutation' includes research on using chemical microchips to efficiently separate TRU waste into MA, burning or destroying waste products, or transmuting plutonium and other nuclear materials. Research on 'nuclear technology and society' and 'education' was also added in order for nuclear energy to be accepted by society. COE-INES was a five-year program ending in 2007, but some activities should be continued, and this roadmap details them as a rough guide focusing on inventions and discoveries. This technology roadmap was created for social acceptance and should be flexible enough to respond to changing times and conditions. (T. Tanaka) 15. Space resolved analysis of suprathermal electrons in dense plasma Directory of Open Access Journals (Sweden) Moinard A. 2013-11-01 Full Text Available The investigation of the hot electron fraction is a crucial topic for high energy density laser driven plasmas: first, energy losses and radiative properties depend strongly on the hot electron fraction and, second, in ICF hohlraums suprathermal electrons preheat the D-T capsule and seriously reduce the fusion performance. In the present work we present our first experimental and theoretical studies to analyze single-shot space resolved hot electron fractions inside dense plasmas via optically thin X-ray line transitions from autoionizing states.
The benchmark experiment has been carried out at an X-pinch in order to create a dense, localized plasma with a well-defined symmetry axis of hot electron propagation. Simultaneous high spatial and spectral resolution in the X-ray spectral range has been obtained with a spherically bent quartz Bragg crystal. The high performance of the X-ray diagnostics allowed us to identify space resolved hot electron fractions via the X-ray spectral distribution of multiple excited states. 16. Time-resolved and position-resolved X-ray spectrometry with a pixelated detector Energy Technology Data Exchange (ETDEWEB) Sievers, Peter 2012-12-07 show a good agreement. Up to now the measurements of impinging spectra with a Timepix detector have been performed in radiation fields with a relatively high fluence. To cope with the requirement of measuring in radiation fields with a low fluence, there had to be changes in the method of analysis compared to those performed formerly. An important improvement in this context was the employment of the Bayesian deconvolution method. The spectra reconstructed with this method were then compared to the results of two different and established detection systems. Firstly, the shape of the deconvolved spectrum was compared to the one measured with an HPGe detector. Secondly, the calculated value of the kerma rate was compared to the one measured with an ionization chamber. This gave an estimate of the correctness of the absolute number of photons. Both comparisons have shown a good agreement and thus I was able to validate that the method delivers precise results. Compared to the formerly used spectrum-stripping method the Bayesian deconvolution turned out to be very stable and reliable.
This improved accuracy in the measurement has been very demanding on the improvements of the simulation of the response matrix needed for deconvolution. Both this enhanced simulation and a pixel-by-pixel calibrated detector opened the possibility of measuring the anode heel effect. Not only the relative angular dependency of the spectrum emitted but also the change in the absolute photon fluence were measured. Furthermore, it is possible to even use small ROIs down to 4x4 pixels to evaluate a spectrum. This was then applied for the spectrometry of small focal spots of a miniature X-ray source used in therapeutics. Furthermore, the robustness and the 17. Time-resolved and position-resolved X-ray spectrometry with a pixelated detector International Nuclear Information System (INIS) Sievers, Peter 2012-01-01 show a good agreement. Up to now the measurements of impinging spectra with a Timepix detector have been performed in radiation fields with a relatively high fluence. To cope with the requirement of measuring in radiation fields with a low fluence, there had to be changes in the method of analysis compared to those performed formerly. An important improvement in this context was the employment of the Bayesian deconvolution method. The spectra reconstructed with this method were then compared to the results of two different and established detection systems. Firstly, the shape of the deconvolved spectrum was compared to the one measured with a hpGe detector. Secondly, the calculated value of the kerma rate was compared to the one measured with an ionization chamber. This gave an estimate on the correctness of the absolute number of photons. Both comparisons have shown a good agreement and thus I was able to validate that the method delivers precise results. Compared to the formerly used spectrum-stripping method the Bayesian deconvolution turned out to be very stable and reliable. 
This robustness of the deconvolution method and the development of a pixel-by-pixel energy calibration were the keys towards position-resolved spectrometry. With such a precise energy calibration the energy resolution was enhanced by up to 45%. This improved accuracy in the measurement has been very demanding on the improvements of the simulation of the response matrix needed for deconvolution. Both this enhanced simulation and a pixel-by-pixel calibrated detector opened the possibility of measuring the anode heel effect. Not only the relative angular dependency of the spectrum emitted but also the change in the absolute photon fluence were measured. Furthermore, it is possible to even use small ROIs down to 4x4 pixels to evaluate a spectrum. This was then applied for the spectrometry of small focal spots of a miniature X-ray source used in therapeutics. Furthermore, the robustness and the 18. Electronic properties of linear carbon chains: Resolving the controversy Science.gov (United States) Al-Backri, Amaal; Zólyomi, Viktor; Lambert, Colin J. 2014-03-01 Literature values for the energy gap of long one-dimensional carbon chains vary from as little as 0.2 eV to more than 4 eV. To resolve this discrepancy, we use the GW many-body approach to calculate the band gap Eg of an infinite carbon chain. We also compute the energy dependence of the attenuation coefficient β governing the decay with chain length of the electrical conductance of long chains and compare this with recent experimental measurements of the single-molecule conductance of end-capped carbon chains. For long chains, we find Eg = 2.16 eV and an upper bound for β of 0.21 Å-1. 19. Electronic properties of linear carbon chains: Resolving the controversy International Nuclear Information System (INIS) Al-Backri, Amaal; Zólyomi, Viktor; Lambert, Colin J. 2014-01-01 Literature values for the energy gap of long one-dimensional carbon chains vary from as little as 0.2 eV to more than 4 eV. 
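The attenuation coefficient β quoted above (upper bound 0.21 Å⁻¹) describes an exponential decay of single-molecule conductance with chain length, G(L) ≈ G_c·exp(−βL), so β is the negative slope of ln G versus L. A minimal sketch with hypothetical data (β and the chain lengths are assumed inputs for illustration, not measured values from the paper):

```python
import math

def fit_beta(lengths, conductances):
    """Least-squares slope of ln(G) vs L; returns beta (the negative slope)."""
    n = len(lengths)
    xs = lengths
    ys = [math.log(g) for g in conductances]
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return -(num / den)  # beta in 1/Angstrom if lengths are in Angstrom

# Hypothetical data generated with beta = 0.21 per Angstrom
beta_true = 0.21
lengths = [5.0, 10.0, 15.0, 20.0]                 # chain lengths (Angstrom)
G = [math.exp(-beta_true * L) for L in lengths]   # conductance in units of G0
print(round(fit_beta(lengths, G), 3))  # -> 0.21
```

With noisy experimental conductances the same log-linear fit applies; only the residual scatter around the line changes.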
To resolve this discrepancy, we use the GW many-body approach to calculate the band gap Eg of an infinite carbon chain. We also compute the energy dependence of the attenuation coefficient β governing the decay with chain length of the electrical conductance of long chains and compare this with recent experimental measurements of the single-molecule conductance of end-capped carbon chains. For long chains, we find Eg = 2.16 eV and an upper bound for β of 0.21 Å-1. 20. Approaches for Resolving Dynamic IP Addressing. Science.gov (United States) Foo, Schubert; Hui, Siu Cheung; Yip, See Wai; He, Yulan 1997-01-01 A problem with dynamic Internet protocol (IP) addressing arises when the Internet connection is through an Internet provider since the IP address is allocated only at connection time. This article examines a number of online and offline methods for resolving the problem. Suggests dynamic domain name system (DNS) and directory service look-up are… 1. The resolved stellar population of Leo A NARCIS (Netherlands) Tolstoy, E 1996-01-01 New observations of the resolved stellar population of the extremely metal-poor Magellanic dwarf irregular galaxy Leo A in Thuan-Gunn r, g, i, and narrowband Hα filters are presented. Using the recent Cepheid variable star distance determination to Leo A by Hoessel et al., we are able to create an 2. Generalized Darcy–Oseen resolvent problem Czech Academy of Sciences Publication Activity Database Medková, Dagmar; Ptashnyk, M.; Varnhorn, W. 2016-01-01 Roč. 39, č. 6 (2016), s. 1621-1630 ISSN 0170-4214 Institutional support: RVO:67985840 Keywords: Darcy–Oseen resolvent problem * semipermeable membrane * Brinkman–Darcy equations * fluid flow between free-fluid domains and porous media Subject RIV: BA - General Mathematics Impact factor: 1.017, year: 2016 http://onlinelibrary.wiley.com/doi/10.1002/mma.3872/abstract 3.
Reverse Universal Resolving Algorithm and inverse driving DEFF Research Database (Denmark) Pécseli, Thomas 2012-01-01 Inverse interpretation is a semantics-based, non-standard interpretation of programs. Given a program and a value, an inverse interpreter finds all or one of the inputs that would yield the given value as output with normal forward evaluation. The Reverse Universal Resolving Algorithm is a new v... 4. Topoisomerase IB of Deinococcus radiodurans resolves guanine ... 2015-11-28 Nov 28, 2015 ... structure in vitro and it may be one such protein that could resolve G4 DNA under normal growth conditions in D. radiodurans. [Kota S and Misra HS 2015 Topoisomerase IB of ..... 2004 Intracellular transcription of G-rich DNAs induces formation of G-loops, novel structures containing G4 DNA. Genes. Dev. 5. Topoisomerase IB of Deinococcus radiodurans resolves guanine ... 2015-11-28 Nov 28, 2015 ... [Kota S and Misra HS 2015 Topoisomerase IB of Deinococcus radiodurans resolves guanine quadruplex DNA structures in vitro. J. Biosci. 40 833–843] ... known for its efficient DNA double strand break repair. (Zahradka et al. ..... These samples were analysed on 12% native PAGE in KCl buffer (a). For CD ... 6. Decomposition of time-resolved tomographic PIV NARCIS (Netherlands) Schmid, P.J.; Violato, D.; Scarano, F. 2012-01-01 An experimental study has been conducted on a transitional water jet at a Reynolds number of Re = 5,000. Flow fields have been obtained by means of time-resolved tomographic particle image velocimetry capturing all relevant spatial and temporal scales. The measured three-dimensional flow fields have 7. Solar Energy. Science.gov (United States) Eaton, William W. Presented is the utilization of solar radiation as an energy resource principally for the production of electricity. Included are discussions of solar thermal conversion, photovoltaic conversion, wind energy, and energy from ocean temperature differences.
Future solar energy plans, the role of solar energy in plant and fossil fuel production, and… 8. Resolving deconvolution ambiguity in gene alternative splicing Directory of Open Access Journals (Sweden) Hubbell Earl 2009-08-01 Full Text Available Abstract Background For many gene structures it is impossible to resolve intensity data uniquely to establish abundances of splice variants. This was empirically noted by Wang et al., who called it a "degeneracy problem". The ambiguity results from an ill-posed problem where additional information is needed in order to obtain a unique answer in splice variant deconvolution. Results In this paper, we analyze the situations under which the problem occurs and perform a rigorous mathematical study which gives necessary and sufficient conditions on how many and what type of constraints are needed to resolve all ambiguity. This analysis is generally applicable to matrix models of splice variants. We explore the proposal that probe sequence information may provide sufficient additional constraints to resolve real-world instances. However, probe behavior cannot be predicted with sufficient accuracy by any existing probe sequence model, and so we present a Bayesian framework for estimating variant abundances by incorporating the prediction uncertainty from the micro-model of probe responsiveness into the macro-model of probe intensities. Conclusion The matrix analysis of constraints provides a tool for detecting real-world instances in which additional constraints may be necessary to resolve splice variants. While purely mathematical constraints can be stated without error, real-world constraints may themselves be poorly resolved. Our Bayesian framework provides a generic solution to the problem of uniquely estimating transcript abundances given additional constraints that themselves may be uncertain, such as a regression fit to probe sequence models.
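The "degeneracy problem" in this record's matrix model can be made concrete: with probe intensities y = A·x, a rank-deficient design A maps distinct variant abundances x to identical intensities, and one added constraint restores uniqueness. A toy two-variant sketch (hypothetical probe design, not the paper's model):

```python
# Two splice variants, two probes that both hit a shared exon:
# each probe sees the SUM of the two variant abundances, so A is rank 1
# and abundances cannot be resolved from intensities alone.
A = [[1, 1],
     [1, 1]]

def intensities(x):
    """Probe intensities y = A.x for abundance vector x."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

x1, x2 = [3.0, 1.0], [2.0, 2.0]
print(intensities(x1) == intensities(x2))  # -> True: the model is degenerate

# One extra constraint (e.g. a variant-specific junction probe that sees
# only variant 0) makes the solution unique:
def resolve(y_shared, y_junction):
    v0 = y_junction          # junction probe measures variant 0 alone
    v1 = y_shared - v0       # shared probe measures v0 + v1
    return [v0, v1]

print(resolve(4.0, 3.0))  # -> [3.0, 1.0]
```

In the paper's terms, the junction probe plays the role of the "additional constraint" that lifts the rank deficiency; when such constraints are themselves noisy, the Bayesian framework described above weighs them by their uncertainty instead of trusting them exactly.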
We demonstrate its efficacy through extensive simulations as well as on various biological data. 9. Energy sustainability through green energy CERN Document Server Sharma, Atul 2015-01-01 This book shares the latest developments and advances in materials and processes involved in energy generation, transmission, distribution and storage. Chapters are written by researchers in the energy and materials field. Topics include, but are not limited to, energy from biomass, bio-gas and bio-fuels; solar, wind, geothermal, hydro power and wave energy; energy transmission, distribution and storage; energy-efficient lighting and buildings; energy sustainability; hydrogen and fuel cells; energy policy for new and renewable energy technologies; and education for sustainable energy development. 10. Optical model calculation for the unresolved/resolved resonance region of Fe-56 Energy Technology Data Exchange (ETDEWEB) Kawano, Toshihiko [Kyushu Univ., Fukuoka (Japan); Froehner, F.H. 1997-03-01 We have studied optical model fits to total neutron cross sections of structural materials using the accurate data base for ⁵⁶Fe existing in the resolved and unresolved resonance region. Averages over resolved resonances were calculated with Lorentzian weighting in the Reich-Moore (reduced R-matrix) approximation. Starting from the best available optical potentials we found that adjustment of the real and imaginary well depths does not work satisfactorily with the conventional weak linear energy dependence of the well depths. If, however, the linear dependences are modified towards low energies, the average total cross sections can be fitted quite well, from the resolved resonance region up to 20 MeV and higher. (author) 11.
Timepix3 as X-ray detector for time resolved synchrotron experiments Energy Technology Data Exchange (ETDEWEB) Yousef, Hazem, E-mail: hazem.yousef@diamond.ac.uk; Crevatin, Giulio; Gimenez, Eva N.; Horswell, Ian; Omar, David; Tartoni, Nicola 2017-02-11 The Timepix3 ASIC can be used very effectively for time resolved experiments at synchrotron facilities. We have carried out characterizations with the synchrotron beam in order to determine the time resolution and other characteristics such as the energy resolution, charge sharing and signal overlap. The best time resolution achieved is 19 ns FWHM for 12 keV photons and 350 V bias voltage. The time resolution shows a dependency on the photon energy as well as on the chip and acquisition parameters. - Highlights: • An estimated time resolution of the Timepix3 is produced based on the arrival time. • At high resolution, the time structure of the DLS synchrotron beam is resolved. • The arrival time information improves the combination of charge-split events. • The results enable a wide range of time resolved experiments. 12. Sandia energy titles Energy Technology Data Exchange (ETDEWEB) Gardner, J.L. (ed.) 1978-08-01 The bibliography of energy-related publications produced by Sandia authors is arranged in broad subject category order. Subjects included are conservation, drilling technology, energy (general), environment and safety, fossil energy, geothermal energy, nuclear energy, and solar energy. 13. Deflection evaluation using time-resolved radiography International Nuclear Information System (INIS) Fry, D.A.; Lucero, J.P. 1990-01-01 Time-resolved radiography is the creation of an x-ray image for which both the start-exposure and stop-exposure times are known with respect to the event under study. The combination of image and timing is used to derive information about the event. The authors have applied time-resolved radiography to evaluate motions of explosive-driven events.
In the particular application discussed in this paper, the authors' intent is to measure maximum deflections of the components involved. Exposures are made during the time from just before to just after the event of interest occurs. A smear or blur of motion out to its furthest extent is recorded on the image. Comparison of the dynamic images with static images allows deflection measurements to be made. 14. Time-resolved brightness measurements by streaking Science.gov (United States) Torrance, Joshua S.; Speirs, Rory W.; McCulloch, Andrew J.; Scholten, Robert E. 2018-03-01 Brightness is a key figure of merit for charged particle beams, and time-resolved brightness measurements can elucidate the processes involved in beam creation and manipulation. Here we report on a simple, robust, and widely applicable method for the measurement of beam brightness with temporal resolution by streaking one-dimensional pepperpots, and demonstrate the technique by characterizing electron bunches produced from a cold-atom electron source. We demonstrate brightness measurements with 145 ps temporal resolution and a minimum resolvable emittance of 40 nm rad. This technique provides an efficient method of exploring source parameters and will prove useful for examining the efficacy of techniques to counter space-charge expansion, a critical hurdle to achieving single-shot imaging of atomic scale targets. 15. Stoner vs. spin-mixing behavior in the bulk magnetism of Gd: A spin-resolved photoemission study. K Maiti, M C Malagoli, A Dallmeyer and C Carbone. Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India; Institut für Festkörperforschung, Forschungszentrum Jülich, ... 16. Energy resources CERN Document Server Simon, Andrew L 1975-01-01 Energy Resources mainly focuses on energy, including its definition, historical perspective, sources, utilization, and conservation.
This text first explains what energy is and what its uses are. This book then explains coal, oil, and natural gas, which are some of the common energy sources used by various industries. Other sources such as wind, solar, geothermal, water, and nuclear energy are also tackled. This text also looks into fusion energy and techniques of energy conversion. This book concludes by explaining the energy allocation and utilization crisis. This publ 17. Time Resolved Shadowgraph Images of Silicon during Laser Ablation: Shockwaves and Particle Generation International Nuclear Information System (INIS) Liu, C Y; Mao, X L; Greif, R; Russo, R E 2007-01-01 Time resolved shadowgraph images were recorded of shockwaves and particle ejection from silicon during laser ablation. Particle ejection and expansion were correlated to an internal shockwave resonating between the shockwave front and the target surface. The number of particles ablated increased with laser energy and was related to the crater volume. 18. Layer-resolved photoelectron diffraction: electron attenuation anisotropy in GaAs Czech Academy of Sciences Publication Activity Database Bartoš, Igor; Cukr, Miroslav; Jiříček, Petr 2012-01-01 Roč. 185, 5-7 (2012), 184-187 ISSN 0368-2048 Grant - others:AVČR(CZ) Praemium Academiae Institutional research plan: CEZ:AV0Z10100521 Keywords: low-energy electron attenuation in GaAs * layer-resolved photoelectron diffraction * synchrotron radiation Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 1.706, year: 2012 19. Time-resolved measurement of a self-amplified free-electron laser International Nuclear Information System (INIS) We report on a time-resolved measurement of self-amplified spontaneous emission free-electron laser (FEL) pulses. We observed that the spikes in such FEL pulses have an intrinsic positive chirp and that the energy chirp in the electron bunch mapped directly into the FEL output.
The measurement also provides rich information on the statistics of the FEL pulses. 20. Time Resolved Shadowgraph Images of Silicon during Laser Ablation: Shockwaves and Particle Generation Energy Technology Data Exchange (ETDEWEB) Liu, C.Y.; Mao, X.L.; Greif, R.; Russo, R.E. 2006-05-06 Time resolved shadowgraph images were recorded of shockwaves and particle ejection from silicon during laser ablation. Particle ejection and expansion were correlated to an internal shockwave resonating between the shockwave front and the target surface. The number of particles ablated increased with laser energy and was related to the crater volume. 1. Photoelectron spectroscopy at a free-electron laser. Investigation of space-charge effects in angle-resolved and core-level spectroscopy and realization of a time-resolved core-level photoemission experiment International Nuclear Information System (INIS) Marczynski-Buehlow, Martin 2012-01-01 The free-electron laser (FEL) in Hamburg (FLASH) is a very interesting light source with which to perform photoelectron spectroscopy (PES) experiments. Its special characteristics include highly intense photon pulses (up to 100 μJ/pulse), a photon energy range of 30 eV to 1500 eV, transverse coherence as well as pulse durations of some ten femtoseconds.
Existing as well as newly developed systems for online monitoring of FEL pulse intensities and for generating spatial and temporal overlap of FEL and optical laser pulses for time-resolved experiments are successfully integrated into the experimental setup for PES. In order to understand space-charge effects (SCEs) in PES and, therefore, to be able to handle those effects in future experiments using highly intense and pulsed photon sources, the origin of energetic broadenings and shifts in photoelectron spectra is studied by means of a molecular dynamics N-body simulation using a modified Treecode Algorithm for sufficiently fast and accurate calculations. It turned out that the most influential parameter is the 'linear electron density' - the ratio of the number of photoelectrons to the diameter of the illuminated spot on the sample. Furthermore, the simulations could reproduce the observations described in the literature fairly well. Some rules of thumb for XPS and ARPES measurements could be deduced from the simulations. Experimentally, SCEs are investigated by means of ARPES as well as XPS measurements as a function of FEL pulse 2. Photoelectron spectroscopy at a free-electron laser. Investigation of space-charge effects in angle-resolved and core-level spectroscopy and realization of a time-resolved core-level photoemission experiment Energy Technology Data Exchange (ETDEWEB) Marczynski-Buehlow, Martin 2012-01-30 The free-electron laser (FEL) in Hamburg (FLASH) is a very interesting light source with which to perform photoelectron spectroscopy (PES) experiments. Its special characteristics include highly intense photon pulses (up to 100 μJ/pulse), a photon energy range of 30 eV to 1500 eV, transverse coherence as well as pulse durations of some ten femtoseconds.
Especially in terms of time-resolved PES (TRPES), the deeper lying core levels can now be reached with photon energies up to 1500 eV with acceptable intensity and, therefore, element-specific, time-resolved core-level PES (XPS) is feasible at FLASH. During the work of this thesis various experimental setups were constructed in order to realize angle-resolved (ARPES), core-level (XPS) as well as time-resolved PES experiments at the plane grating monochromator beamline PG2 at FLASH. Existing as well as newly developed systems for online monitoring of FEL pulse intensities and for generating spatial and temporal overlap of FEL and optical laser pulses for time-resolved experiments are successfully integrated into the experimental setup for PES. In order to understand space-charge effects (SCEs) in PES and, therefore, to be able to handle those effects in future experiments using highly intense and pulsed photon sources, the origin of energetic broadenings and shifts in photoelectron spectra is studied by means of a molecular dynamics N-body simulation using a modified Treecode Algorithm for sufficiently fast and accurate calculations. It turned out that the most influential parameter is the 'linear electron density' - the ratio of the number of photoelectrons to the diameter of the illuminated spot on the sample. Furthermore, the simulations could reproduce the observations described in the literature fairly well. Some rules of thumb for XPS and ARPES measurements could be deduced from the simulations. Experimentally, SCEs are investigated by means of ARPES as well as XPS measurements as a function of 3. Becoming homeless, being homeless, and resolving homelessness among women. Science.gov (United States) Finfgeld-Connett, Deborah 2010-07-01 The purpose of this investigation was to more comprehensively articulate the experiences of homeless women and make evidence-based inferences regarding optimal social services.
This study was conducted using qualitative meta-synthesis methods. As youth, homeless women experience challenging circumstances that leave them ill-prepared to prevent and resolve homelessness in adulthood. Resolution of homelessness occurs in iterative stages: crisis, assessment, and sustained action. To enhance forward progression through these stages, nurses are encouraged to promote empowerment in concordance with the Transtheoretical and Harm Reduction Models. Services that are highly valued include physical and mental health care and child care assistance. 4. Exploring cancer metabolism using stable isotope-resolved metabolomics (SIRM). Science.gov (United States) Bruntz, Ronald C; Lane, Andrew N; Higashi, Richard M; Fan, Teresa W-M 2017-07-14 Metabolic reprogramming is a hallmark of cancer. The changes in metabolism are adaptive to permit proliferation, survival, and eventually metastasis in a harsh environment. Stable isotope-resolved metabolomics (SIRM) is an approach that uses advanced approaches of NMR and mass spectrometry to analyze the fate of individual atoms from stable isotope-enriched precursors to products to deduce metabolic pathways and networks. The approach can be applied to a wide range of biological systems, including human subjects. This review focuses on the applications of SIRM to cancer metabolism and its use in understanding drug actions. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc. 5. Time Resolved Spectroscopy of MOVPE Grown Narrow Gap Ferromagnetic Semiconductors Science.gov (United States) Bhowmick, Mithun; Merritt, Travis; Khodaparast, Giti A.; Feeser, Caitlin; Wessels, Bruce W.; McGill, Stephen; Saha, D.; Pan, X.; Sanders, G. D.; Stanton, C. J. 2011-12-01 We report on time resolved differential transmission experiments to provide insight into both the time scales and the nature of the microscopic interactions and carrier dynamics in MOVPE grown ferromagnetic InMnAs and InMnSb. 
Theoretical calculations of the electronic structure for InMnAs are performed using an 8-band k·p model which includes non-parabolicity of the conduction bands; strong band-mixing of the valence bands; as well as coupling of Mn impurities to the electrons and holes. Our preliminary theoretical results explain the sign change in the differential transmission signal as a function of probe wavelength. 6. Angle-resolved photoemission spectroscopy with quantum gas microscopes Science.gov (United States) Bohrdt, A.; Greif, D.; Demler, E.; Knap, M.; Grusdt, F. 2018-03-01 Quantum gas microscopes are a promising tool to study interacting quantum many-body systems and bridge the gap between theoretical models and real materials. So far, they were limited to measurements of instantaneous correlation functions of the form 〈Ô(t)〉, even though extensions to frequency-resolved response functions 〈Ô(t)Ô(0)〉 would provide important information about the elementary excitations in a many-body system. For example, single-particle spectral functions, which are usually measured using photoemission experiments in electron systems, contain direct information about fractionalization and the quasiparticle excitation spectrum. Here, we propose a measurement scheme to experimentally access the momentum- and energy-resolved spectral function in a quantum gas microscope with currently available techniques. As an example of possible applications, we numerically calculate the spectrum of a single hole excitation in one-dimensional t-J models with isotropic and anisotropic antiferromagnetic couplings. A sharp asymmetry in the distribution of spectral weight appears when a hole is created in an isotropic Heisenberg spin chain. This effect slowly vanishes for anisotropic spin interactions and disappears completely in the case of pure Ising interactions.
The asymmetry strongly depends on the total magnetization of the spin chain, which can be tuned in experiments with quantum gas microscopes. An intuitive picture for the observed behavior is provided by a slave-fermion mean-field theory. The key properties of the spectra are visible at currently accessible temperatures. 7. Time-Resolved Hard X-Ray Spectrometer International Nuclear Information System (INIS) Kenneth Moy; Ian McKenna; Thomas Keenan; Michael Cuneo 2007-01-01 Wire array studies are being conducted at the SNL Z accelerator to maximize the x-ray generation for inertial confinement fusion targets and high energy density physics experiments. An integral component of these studies is the characterization of the time-resolved spectral content of the x-rays. Due to potential spatial anisotropy in the emitted radiation, it is also critical to diagnose the time-evolved spectral content in a space-resolved manner. To accomplish these two measurement goals, we developed an x-ray spectrometer using a set of high-speed detectors (silicon PIN diodes) with a collimated field-of-view that converged on a 1-cm-diameter spot at the pinch axis. Spectral discrimination is achieved by placing high-Z absorbers in front of these detectors. We built two spectrometers to permit simultaneous different angular views of the emitted radiation. Spectral data have been acquired from recent Z shots for the radial and polar views. The UNSPEC code has been adapted to analyze and unfold the measured data to reconstruct the x-ray spectrum. The unfold operator code, UFO, is being adapted for a more comprehensive spectral unfolding treatment 8. Comparative frequency-resolved photoconductivity studies of amorphous semiconductors Energy Technology Data Exchange (ETDEWEB) Kaplan, R.
[Department of Secondary Science and Mathematics Education, University of Mersin, Yenisehir Campus, 33169 Mersin (Turkey) 2005-02-01 Comparative frequency-resolved photoconductivity measurements in amorphous (a-) semiconductors, such as a-Si:H p-i-n junction, a-SiGe:H and a-chalcogenides (a-Se, a-As{sub 2}Se{sub 3}, a-As{sub 2}Te{sub 3}, a-SeTe, a-As{sub 2}S{sub 3}, etc.) are reported. In particular, photoconductivity lifetimes as a function of light intensity and temperature were determined by using the quadrature frequency-resolved spectroscopy method. The activation energies from the temperature-dependent lifetime and photocurrent were determined and compared in different materials. The exponent n in the power-law relationship (I{sub ph} ∝ G{sup n}) between generation flux and photocurrent was also obtained at different excitation wavelengths. The results were compared with the predictions of the multiple-trapping (MT) and distant-pair (DP) models developed for the photoconductivity of a-semiconductors at high and low temperatures, respectively. 9. Fully resolved simulations of expansion waves propagating into particle beds Science.gov (United States) Marjanovic, Goran; Hackl, Jason; Annamalai, Subramanian; Jackson, Thomas; Balachandar, S. 2017-11-01 There is a tremendous amount of research that has been done on compression waves and shock waves moving over particles but very little concerning expansion waves. Using 3-D direct numerical simulations, this study will explore expansion waves propagating into fully resolved particle beds of varying volume fractions and geometric arrangements. The objectives of these simulations are as follows: 1) to fully resolve all (one-way coupled) forces on the particles in a time-varying flow and 2) to verify state-of-the-art drag models for such complex flows.
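The power-law exponent n in entry 8's I{sub ph} ∝ G{sup n} relation is conventionally extracted as the slope of a log-log fit of photocurrent against generation flux. A minimal sketch with synthetic data (all numbers illustrative, not from the cited measurements):

```python
import numpy as np

# Synthetic generation fluxes (arbitrary units) and a photocurrent obeying I_ph = G**0.7.
G = np.logspace(12, 16, 20)
I_ph = G ** 0.7

# The exponent n is the slope in log-log coordinates.
n, log_prefactor = np.polyfit(np.log10(G), np.log10(I_ph), 1)
print(round(n, 3))  # ≈ 0.7
```

An exponent between 0.5 and 1 is the usual signature of a recombination regime intermediate between bimolecular and monomolecular.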
We will explore a range of volume fractions, from very low ones that are similar to single-particle flows, to higher ones where nozzling effects are observed between neighboring particles. Further, we will explore two geometric arrangements: body-centered cubic and face-centered cubic. We will quantify the effects that volume fraction and geometric arrangement play on the drag forces and flow fields experienced by the particles. These results will then be compared to theoretical predictions from a model based on the generalized Faxen's theorem. This work was supported in part by the U.S. Department of Energy under the Predictive Science Academic Alliance Program, under Contract No. DE-NA0002378. 10. What Supports an Aeroplane? Force, Momentum, Energy and Power in Flight Science.gov (United States) Robertson, David 2014-01-01 Some apparently confusing aspects of Newton's laws as applied to an aircraft in normal horizontal flight are neatly resolved by a careful analysis of force, momentum, energy and power. A number of related phenomena are explained at the same time, including the lift and induced drag coefficients, used empirically in the aviation industry. 11. Resolving the evolutionary relationships of molluscs with phylogenomic tools. Science.gov (United States) Smith, Stephen A; Wilson, Nerida G; Goetz, Freya E; Feehery, Caitlin; Andrade, Sónia C S; Rouse, Greg W; Giribet, Gonzalo; Dunn, Casey W 2011-10-26 Molluscs (snails, octopuses, clams and their relatives) have a great disparity of body plans and, among the animals, only arthropods surpass them in species number. This diversity has made Mollusca one of the best-studied groups of animals, yet their evolutionary relationships remain poorly resolved. Open questions have important implications for the origin of Mollusca and for morphological evolution within the group.
These questions include whether the shell-less, vermiform aplacophoran molluscs diverged before the origin of the shelled molluscs (Conchifera) or lost their shells secondarily. Monoplacophorans were not included in molecular studies until recently, when it was proposed that they constitute a clade named Serialia together with Polyplacophora (chitons), reflecting the serial repetition of body organs in both groups. Attempts to understand the early evolution of molluscs become even more complex when considering the large diversity of Cambrian fossils. These can have multiple dorsal shell plates and sclerites or can be shell-less but with a typical molluscan radula and serially repeated gills. To better resolve the relationships among molluscs, we generated transcriptome data for 15 species that, in combination with existing data, represent for the first time all major molluscan groups. We analysed multiple data sets containing up to 216,402 sites and 1,185 gene regions using multiple models and methods. Our results support the clade Aculifera, containing the three molluscan groups with spicules but without true shells, and they support the monophyly of Conchifera. Monoplacophora is not the sister group to other Conchifera but to Cephalopoda. Strong support is found for a clade that comprises Scaphopoda (tusk shells), Gastropoda and Bivalvia, with most analyses placing Scaphopoda and Gastropoda as sister groups. This well-resolved tree will constitute a framework for further studies of mollusc evolution, development and anatomy. 12. Electric Power Monthly, August 1990. [Glossary included] Energy Technology Data Exchange (ETDEWEB) 1990-11-29 The Electric Power Monthly (EPM) presents monthly summaries of electric utility statistics at the national, Census division, and State level. The purpose of this publication is to provide energy decisionmakers with accurate and timely information that may be used in forming various perspectives on electric issues that lie ahead.
Data include generation by energy source (coal, oil, gas, hydroelectric, and nuclear); generation by region; consumption of fossil fuels for power generation; sales of electric power; cost data; and unusual occurrences. A glossary is included. Energy Technology Data Exchange (ETDEWEB) NONE 2011-07-01 Increased focus has been placed on the issues of energy access and energy poverty over the last number of years, most notably indicated by the United Nations (UN) declaring 2012 the 'International Year of Sustainable Energy for All'. Although attention to these topics has increased, incorrect assumptions and misunderstandings still arise in both the literature and dialogues. Access to energy does not only include electricity, does not only include cook stoves, but must include access to all types of energy that form the overall energy system. This paper chooses to examine this energy system using a typology that breaks it into three primary energy subsystems: heat energy, electricity and transportation. Describing the global energy system using these three subsystems provides a way to articulate the differences and similarities in each system's required investment needs by the private and public sectors. 14. Examining Electron-Boson Coupling Using Time-Resolved Spectroscopy Energy Technology Data Exchange (ETDEWEB) Sentef, Michael; Kemper, Alexander F.; Moritz, Brian; Freericks, James K.; Shen, Zhi-Xun; Devereaux, Thomas P. 2013-12-26 Nonequilibrium pump-probe time-domain spectroscopies can become an important tool to disentangle degrees of freedom whose coupling leads to broad structures in the frequency domain. Here, using the time-resolved solution of a model photoexcited electron-phonon system, we show that the relaxational dynamics are directly governed by the equilibrium self-energy so that the phonon frequency sets a window for “slow” versus “fast” recovery.
The overall temporal structure of this relaxation spectroscopy allows for a reliable and quantitative extraction of the electron-phonon coupling strength without requiring an effective temperature model or making strong assumptions about the underlying bare electronic band dispersion. 15. Chemistry resolved kinetic flow modeling of TATB based explosives Science.gov (United States) Vitello, Peter; Fried, Laurence E.; William, Howard; Levesque, George; Souers, P. Clark 2012-03-01 Detonation waves in insensitive, TATB-based explosives are believed to have multiple time scale regimes. The initial burn rate of such explosives has a sub-microsecond time scale. However, significant late-time slow release in energy is believed to occur due to diffusion limited growth of carbon. In the intermediate time scale concentrations of product species likely change from being in equilibrium to being kinetic rate controlled. We use the thermo-chemical code CHEETAH linked to an ALE hydrodynamics code to model detonations. We term our model chemistry resolved kinetic flow, since CHEETAH tracks the time dependent concentrations of individual species in the detonation wave and calculates EOS values based on the concentrations. We present here two variants of our new rate model and comparison with hot, ambient, and cold experimental data for PBX 9502. 16. The time-resolved photoelectron spectrum of toluene using a perturbation theory approach Energy Technology Data Exchange (ETDEWEB) Richings, Gareth W.; Worth, Graham A., E-mail: g.a.worth@bham.ac.uk [School of Chemistry, University of Birmingham, Edgbaston, Birmingham, B15 2TT (United Kingdom) 2014-12-28 A theoretical study of the intra-molecular vibrational-energy redistribution of toluene using time-resolved photo-electron spectra calculated using nuclear quantum dynamics and a simple, two-mode model is presented. 
Calculations have been carried out using the multi-configuration time-dependent Hartree method, with three levels of approximation for the calculation of the spectra. The first is a full quantum dynamics simulation with a discretisation of the continuum wavefunction of the ejected electron, whilst the second uses first-order perturbation theory to calculate the wavefunction of the ion. Both methods rely on the explicit inclusion of both the pump and probe laser pulses. The third method includes only the pump pulse and generates the photo-electron spectrum by projection of the pumped wavepacket onto the ion potential energy surface, followed by evaluation of the Fourier transform of the autocorrelation function of the subsequently propagated wavepacket. The calculations performed have been used to study the periodic population flow between the 6a and 10b16b modes in the S{sub 1} excited state, and compared to recent experimental data. We obtain results in excellent agreement with the experiment and note the efficiency of the perturbation method. 17. Depth resolved hyperspectral imaging spectrometer based on structured light illumination and Fourier transform interferometry Science.gov (United States) Choi, Heejin; Wadduwage, Dushan; Matsudaira, Paul T.; So, Peter T.C. 2014-01-01 A depth resolved hyperspectral imaging spectrometer can provide depth resolved imaging both in the spatial and the spectral domain. Images acquired through a standard imaging Fourier transform spectrometer do not have depth resolution. By post processing the spectral cubes (x, y, λ) obtained through a Sagnac interferometer under uniform illumination and structured illumination, spectrally resolved images with depth resolution can be recovered using structured light illumination algorithms such as the HiLo method. The proposed scheme is validated with in vitro specimens including fluorescent solution and fluorescent beads with known spectra.
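The HiLo reconstruction mentioned in this entry is not detailed in the abstract. As a loose, hypothetical sketch of the general idea only (a difference-image variant; the crude box filter stands in for a proper Gaussian kernel, and all parameters are illustrative):

```python
import numpy as np

def box_blur(img: np.ndarray, k: int = 7) -> np.ndarray:
    """Crude low-pass filter: k x k moving average built from shifted sums."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def hilo(uniform: np.ndarray, structured: np.ndarray, eta: float = 1.0) -> np.ndarray:
    """Simplified HiLo-style fusion: low spatial frequencies come from the
    |uniform - structured| difference image (which suppresses out-of-focus
    background), high frequencies from the uniform image itself."""
    lo = box_blur(np.abs(uniform - structured))
    hi = uniform - box_blur(uniform)
    return eta * lo + hi
```

The design point is that the structured image contributes only a sectioning weight at low frequencies, while fine detail, which is intrinsically sectioned, is taken directly from the uniform image.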
The system is further demonstrated in quantifying spectra from 3D resolved features in biological specimens. The system has demonstrated a depth resolution of 1.8 μm and a spectral resolution of 7 nm, respectively. PMID:25360367 18. Geothermal Energy. Science.gov (United States) Nemzer, Marilyn; Page, Deborah This curriculum unit describes geothermal energy in the context of the world's energy needs. It addresses renewable and nonrenewable energy sources with an in-depth study of geothermal energy--its geology, its history, and its many uses. Included are integrated activities involving science, as well as math, social studies, and language arts.… 19. 2002 energy statistics International Nuclear Information System (INIS) 2003-01-01 This report has 12 chapters. The first chapter covers world energy reserves, and the second world primary energy production and consumption. The remaining chapters cover world energy prices; energy reserves in Turkey; Turkey's primary energy production and consumption; Turkey's energy balance tables; Turkey's primary energy reserves, production, consumption, imports and exports; sectoral energy consumption; Turkey's secondary electricity plants; Turkey's energy investments; and Turkey's energy prices. The report gives energy statistics for the world and for Turkey. 20. Spectra-resolved technique of a sensitive time-resolved fluorescence immunoassay instrument Science.gov (United States) Guo, Zhouyi; Tian, Zhen; Jia, Yali 2004-07-01 Trivalent lanthanide ions and their chelates are used as labels in time-resolved fluorescence immunoassay (TRFIA): a protein, hormone, antibody, nucleic acid probe or living biological cell is labeled, and the concentration of the analyte in the reaction system is measured by time-resolved fluorometry after the reaction has taken place, achieving quantitative analysis.
TRFIA has become a newer and more sensitive measurement method than radioisotope labeling, enzymatic labeling, chemiluminescence and electrochemiluminescence; this is determined primarily by the special physical and chemical characteristics of trivalent lanthanide ions and their chelates. In this paper, the results of a spectroscopic evaluation of the trivalent europium ion and its chelate are reported, together with the principle of the spectra-resolved technique and a sensitive time-resolved fluorescence immunoassay instrument built in-house. In the instrument, a high-frequency pulsed xenon lamp is adopted as the excitation light source, and two special filters are utilized according to the spectra-resolved technique. Thus the influence of scattered light and short-lifetime fluorescence is removed. The sensitivity is 10(-12) mol/L (when Eu3+ is used as the label), and the measurement repeatability is CV = 95% (p < 0.01). 1. Achieving patient satisfaction: resolving patient complaints. Science.gov (United States) Oxler, K F 1997-07-01 Patients demand to be active participants on and partners with the health care team to design their care regimen. Patients bring unique perceptions and expectations and use these to evaluate service quality and satisfaction. If customer satisfaction is not achieved and a patient complaint results, staff must have the skills to respond and launch a service recovery program. Service recovery, when done with style and panache, can retain loyal customers. Achieving patient satisfaction and resolving patient complaints require commitment from top leadership and commitment from providers to dedicate the time to understand their patients' needs. 2. Daylight time-resolved photographs of lightning. Science.gov (United States) Orville, R E; Lala, G G; Idone, V P 1978-07-07 Lightning dart leaders and return strokes have been recorded in daylight with both good spatial resolution and good time resolution as part of the Thunderstorm Research International Program.
The resulting time-resolved photographs are apparently equivalent to the best data obtained earlier only at night. Average two-dimensional return stroke velocities in four subsequent strokes between the ground and a height of 1400 meters were approximately 1.3 x 10(8) meters per second. The estimated systematic error is 10 to 15 percent. 3. Spatially Resolved Analysis of Bragg Selectivity Directory of Open Access Journals (Sweden) Tina Sabel 2015-11-01 Full Text Available This paper targets an inherent control of optical shrinkage in photosensitive polymers, contributing by means of spatially resolved analysis of volume holographic phase gratings. Point by point scanning of the local material response to the Gaussian intensity distribution of the recording beams is accomplished. Derived information on the local grating period and grating slant is evaluated by mapping of optical shrinkage in the lateral plane as well as through the depth of the layer. The influence of recording intensity, exposure duration and the material viscosity on the Bragg selectivity is investigated. 4. Full-Circle Resolver-to-Linear-Analog Converter Science.gov (United States) Alhorn, Dean C.; Smith, Dennis A.; Howard, David E. 2005-01-01 A circuit generates sinusoidal excitation signals for a shaft-angle resolver and, like the arctangent circuit described in the preceding article, generates an analog voltage proportional to the shaft angle. The disadvantages of the circuit described in the preceding article arise from the fact that it must be made from precise analog subcircuits, including a functional block capable of implementing some trigonometric identities; this circuitry tends to be expensive, sensitive to noise, and susceptible to errors caused by temperature-induced drifts and imprecise matching of gains and phases. These disadvantages are overcome by the design of the present circuit. 
The present circuit (see figure) includes an excitation circuit, which generates signals K sin(Ωt) and K cos(Ωt) [where K is an amplitude, Ω denotes 2π times the carrier frequency (the design value of which is 10 kHz), and t denotes time]. These signals are applied to the excitation terminals of a shaft-angle resolver, causing the resolver to put out signals C sin(Ωt − Θ) and C cos(Ωt − Θ). The cosine excitation signal and the cosine resolver output signal are processed through inverting comparator circuits, which are configured to function as inverting squarers, to obtain logic-level or square-wave signals −LL[cos(Ωt)] and −LL[cos(Ωt − Θ)], respectively. These signals are fed as inputs to a block containing digital logic circuits that effectively measure the phase difference (which equals Θ) between the two logic-level signals. The output of this block is a pulse-width-modulated signal, PWM(Θ), the time-averaged value of which ranges from 0 to 5 VDC as Θ ranges from −180° to +180°. PWM(Θ) is fed to a block of amplifying and level-shifting circuitry, which converts the input PWM waveform to an output waveform that switches between precise reference voltage levels of +10 and −10 V. This waveform is processed by a two-pole, low-pass filter, which removes 5. Solar energy International Nuclear Information System (INIS) Anon. 1992-01-01 This chapter discusses the role solar energy may have in the energy future of the US. The topics discussed in the chapter include the solar resource, solar architecture including passive solar design and solar collectors, solar-thermal concentrating systems including parabolic troughs and dishes and central receivers, photovoltaic cells including photovoltaic systems for home use, and environmental, health and safety issues 6. Oregon: a guide to geothermal energy development.
[Includes glossary] Energy Technology Data Exchange (ETDEWEB) Justus, D.; Basescu, N.; Bloomquist, R.G.; Higbee, C.; Simpson, S. 1980-06-01 The following subjects are covered: Oregon's geothermal potential, exploration methods and costs, drilling, utilization methods, economic factors of direct use projects, and legal and institutional setting. (MHR) 7. Your solar energy home: including wind and methane applications National Research Council Canada - National Science Library Howell, Derek 1979-01-01 .... When a particular title is adopted or recommended for adoption for class use and the recommendation results in a sale of 12 or more copies, the inspection copy may be retained with our compliments. The Publishers will be pleased to receive suggestions for revised editions and new titles to be published in this important International Library. Other Pergamon ... 8. Healthcare Teams Neurodynamically Reorganize When Resolving Uncertainty Directory of Open Access Journals (Sweden) Ronald Stevens 2016-11-01 Full Text Available Research on the microscale neural dynamics of social interactions has yet to be translated into improvements in the assembly, training and evaluation of teams. This is partially due to the scale of neural involvements in team activities, spanning the millisecond oscillations in individual brains to the minutes/hours performance behaviors of the team. We have used intermediate neurodynamic representations to show that healthcare teams enter persistent (50–100 s) neurodynamic states when they encounter and resolve uncertainty while managing simulated patients. Each of the per-second symbols was developed by situating the electroencephalogram (EEG) power of each team member in the context of those of the other team members and the task. These representations were acquired from EEG headsets with 19 recording electrodes for each of the 1–40 Hz frequencies.
Estimates of the information in each symbol stream were calculated from a 60 s moving window of Shannon entropy that was updated each second, providing a quantitative neurodynamic history of the team's performance. Neurodynamic organizations fluctuated with the task demands, with increased organization (i.e., lower entropy) occurring when the team needed to resolve uncertainty. These results show that intermediate neurodynamic representations can provide a quantitative bridge between the micro and macro scales of teamwork. 9. Time Resolved Deposition Measurements in NSTX International Nuclear Information System (INIS) Skinner, C.H.; Kugel, H.; Roquemore, A.L.; Hogan, J.; Wampler, W.R. 2004-01-01 Time-resolved measurements of deposition in current tokamaks are crucial to gain a predictive understanding of deposition with a view to mitigating tritium retention and deposition on diagnostic mirrors expected in next-step devices. Two quartz crystal microbalances have been installed on NSTX at a location 0.77 m outside the last closed flux surface. This configuration mimics a typical diagnostic window or mirror. The deposits were analyzed ex-situ and found to be dominantly carbon, oxygen, and deuterium. A rear-facing quartz crystal recorded deposition of lower-sticking-probability molecules at 10% of the rate of the front-facing one. Time-resolved measurements over a 4-week period with 497 discharges recorded 29.2 μg/cm² of deposition; however, surprisingly, 15.9 μg/cm² of material loss occurred at 7 discharges. The net deposited mass of 13.3 μg/cm² matched the mass of 13.5 μg/cm² measured independently by ion beam analysis. Monte Carlo modeling suggests that transient processes are likely to dominate the deposition. 10. Error-measure for anisotropic grid-adaptation in turbulence-resolving simulations Science.gov (United States) 2015-11-01 Grid-adaptation requires an error-measure that identifies where the grid should be refined.
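Returning to the healthcare-team study in entry 8 above: a 60 s moving window of Shannon entropy over a symbol stream, updated each second, can be sketched as follows (window length as described there; the symbol stream itself is hypothetical):

```python
import math
from collections import Counter

def shannon_entropy(symbols) -> float:
    """Shannon entropy (bits) of a symbol sequence."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def moving_window_entropy(stream, window: int = 60):
    """Entropy of each 60-symbol window, advanced one symbol (one second) at a time."""
    return [shannon_entropy(stream[i:i + window])
            for i in range(len(stream) - window + 1)]

# A stream that keeps alternating symbols stays near maximum entropy; a long
# repetitive stretch (more organized team dynamics) drives the windowed entropy down.
```

The resulting entropy-versus-time trace is the "quantitative neurodynamic history" the abstract refers to: dips mark the organized states associated with resolving uncertainty.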
In the case of turbulence-resolving simulations (DES, LES, DNS), a simple error-measure is the small-scale resolved energy, which scales with both the modeled subgrid-stresses and the numerical truncation errors in many situations. Since this is a scalar measure, it does not carry any information on the anisotropy of the optimal grid-refinement. The purpose of this work is to introduce a new error-measure for turbulence-resolving simulations that is capable of predicting nearly-optimal anisotropic grids. Turbulent channel flow at Reτ ~ 300 is used to assess the performance of the proposed error-measure. The formulation is geometrically general, applicable to any type of unstructured grid. 11. Spectral cumulus parameterization based on cloud-resolving model Science.gov (United States) Baba, Yuya 2018-02-01 We have developed a spectral cumulus parameterization using a cloud-resolving model. This includes a new parameterization of the entrainment rate which was derived from analysis of the cloud properties obtained from the cloud-resolving model simulation and was valid for both shallow and deep convection. The new scheme was examined in a single-column model experiment and compared with the existing parameterization of Gregory (2001, Q J R Meteorol Soc 127:53-72) (GR scheme). The results showed that the GR scheme simulated more shallow and diluted convection than the new scheme. To further validate the physical performance of the parameterizations, Atmospheric Model Intercomparison Project (AMIP) experiments were performed, and the results were compared with reanalysis data. The new scheme performed better than the GR scheme in terms of the mean state and variability of the atmospheric circulation; i.e., it reduced the positive bias of precipitation in the western Pacific region and the positive bias of outgoing shortwave radiation over the ocean. The new scheme also simulated better features of convectively coupled equatorial waves and the Madden-Julian oscillation.
These improvements were found to derive from the modified parameterization of the entrainment rate; i.e., the proposed parameterization suppressed an excessive increase of entrainment, thus suppressing an excessive increase of low-level clouds. 12. Time Resolved FTIR Analysis of Tailpipe Exhaust for Several Automobiles Science.gov (United States) White, Allen R.; Allen, James; Devasher, Rebecca B. 2011-06-01 The automotive catalytic converter reduces or eliminates the emission of various chemical species (e.g. CO, hydrocarbons, etc.) that are the products of combustion from automobile exhaust. However, these units are only effective once they have reached operating temperature. The design and placement of catalytic converters have changed in order to reduce both the quantity of emissions and the time that is required for the converter to become effective. In order to compare the effectiveness of catalytic converters, time-resolved measurements were performed on several vehicles, including a 2010 Toyota Prius, a 2010 Honda Fit, a 1994 Honda Civic, and a 1967 Oldsmobile 442 (which is not equipped with a catalytic converter but is used as a baseline). The newer vehicles demonstrate both a reduced overall level of CO and hydrocarbon emissions and become effective more quickly than older units. The time-resolved emissions will be discussed along with the impact of catalytic converter design and location on the measured emissions. 13. Energy 93, energy in Israel International Nuclear Information System (INIS) Shilo, D.; Bar Mashiah, D.; Er-El, J. 1993-01-01 For the first time this report includes a chapter entitled 'energy and peace'. Following is an overview of Israel's energy economy and some principal initiatives in its various sectors during the 1992/93 period. 46 figs, 13 tabs 14.
Nuclear Energy Institute (NEI) summary International Nuclear Information System (INIS) 2001-01-01 A representative of the Nuclear Energy Institute (NEI) gave a brief presentation on the state of energy demand in the United States and discussed the improving economics for new nuclear power plants. He discussed the consolidation of companies under deregulation and the ability of these larger companies to undertake large capital projects such as nuclear power plant construction. He discussed efforts under way to support a new generation of plants but noted that there needs to be greater certainty in the licensing process. He discussed infrastructure challenges in terms of people, hardware, and services to support new and current plants. He stated that there needs to be fair and equitable licensing fees and decommissioning funding assurance for innovative modular designs such as the PBMR. He concluded that NRC challenges will include resolving 10 CFR Part 52 implementation issues, establishing an efficient and predictable process for siting, COL permits and inspection, and an increasing regulatory workload 15. Radiofrequency encoded angular-resolved light scattering DEFF Research Database (Denmark) Buckley, Brandon W.; Akbari, Najva; Diebold, Eric D. 2015-01-01 The sensitive, specific, and label-free classification of microscopic cells and organisms is one of the outstanding problems in biology. Today, instruments such as the flow cytometer use a combination of light scatter measurements at two distinct angles to infer the size and internal complexity...... of cells at rates of more than 10,000 per second. However, by examining the entire angular light scattering spectrum it is possible to classify cells with higher resolution and specificity. Current approaches to performing these angular spectrum measurements all have significant throughput limitations......
Encoded Angular-resolved Light Scattering (REALS), this technique multiplexes angular light scattering in the radiofrequency domain, such that a single photodetector captures the entire scattering spectrum from a particle over approximately 100 discrete incident angles on a single shot basis. As a proof... 16. Resolving coastal conflicts using marine spatial planning. Science.gov (United States) Tuda, Arthur O; Stevens, Tim F; Rodwell, Lynda D 2014-01-15 We applied marine spatial planning (MSP) to manage conflicts in a multi-use coastal area of Kenya. MSP involves several steps, which were supported by using geographical information systems (GIS), multi-criteria decision analysis (MCDA) and optimization. GIS was used in identifying overlapping coastal uses and mapping conflict hotspots. MCDA was used to incorporate the preferences of user groups and managers into a formal decision analysis procedure. Optimization was applied in generating optimal allocation alternatives for competing uses. Through this analysis, three important objectives that build a foundation for future planning of Kenya's coastal waters were achieved: 1) engaging competing stakeholders; 2) illustrating how MSP can be adapted to aid decision-making in multi-use coastal regions; and 3) developing a draft coastal use allocation plan. The successful application of MSP to resolve conflicts in coastal regions depends on the level of stakeholder involvement, data availability and the existing knowledge base. Copyright © 2013 Elsevier Ltd. All rights reserved. 17. Time-resolved thermography at Tokamak T-10 International Nuclear Information System (INIS) Grunow, C.; Guenther, K.; Lingertat, J.; Chicherov, V.M.; Evstigneev, S.A.; Zvonkov, S.N. 1987-01-01 Thermographic experiments were performed at the T-10 tokamak to investigate the thermal coupling of the plasma and the limiter.
The limiter is an internal component of the vacuum vessel of tokamak-type fusion devices, and the interaction of the plasma with the limiter results in a high thermal load on the limiter for a short time. In order to improve the limiter design, the temperature distribution on the limiter surface was measured by a time-resolved thermographic method. Typical isotherms and temperature increment curves are presented. This measurement can be used as a systematic plasma diagnostic method because the limiter is installed in the tokamak, whereas special additional probes often disturb the plasma discharge. (D.Gy.) 3 refs.; 7 figs 18. Resolving capacity of the circular Zernike polynomials. Science.gov (United States) Svechnikov, M V; Chkhalo, N I; Toropov, M N; Salashchenko, N N 2015-06-01 Circular Zernike polynomials are often used for approximation and analysis of optical surfaces. In this paper, we analyse their lateral resolving capacity, illustrating the effects of a lack of approximation by a finite set of polynomials and answering the following questions: What is the minimum number of polynomials that is necessary to describe a local deformation of a certain size? What is the relationship between the number of approximating polynomials and the spatial spectrum of the approximation? What is the connection between the mean-square error of approximation and the number of polynomials? The main results of this work are the formulas for calculating the error of fitting the relief and the connection between the width of the spatial spectrum and the order of approximation. 19. Time-resolved fluoroimmunoassay of CA125 International Nuclear Information System (INIS) Cai Gangming; Huang Biao; Zhu Liguo; Xiao Hualong; Tan Cheng; Tao Yonghui; Jin Jian 2001-01-01 A two-site time-resolved fluoroimmunoassay (TRFIA) of CA 125 based on the direct sandwich technique has been developed, using the equilibrium method.
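The resolving-capacity entry above (item 18) rests on the standard circular Zernike polynomials. As a minimal illustration, not taken from that paper, the closed-form radial part R_n^m(ρ) can be evaluated directly; the function name and checks below are purely illustrative:

```python
import math

def zernike_radial(n, m, rho):
    """Radial part R_n^m(rho) of the circular Zernike polynomial
    (standard closed form; requires n >= |m| >= 0 and n - |m| even)."""
    m = abs(m)
    total = 0.0
    for k in range((n - m) // 2 + 1):
        coeff = ((-1) ** k * math.factorial(n - k)
                 / (math.factorial(k)
                    * math.factorial((n + m) // 2 - k)
                    * math.factorial((n - m) // 2 - k)))
        total += coeff * rho ** (n - 2 * k)
    return total

# Sanity checks against the textbook forms:
# R_2^0(rho) = 2*rho^2 - 1 and R_4^0(rho) = 6*rho^4 - 6*rho^2 + 1;
# every radial polynomial equals 1 at the pupil edge (rho = 1).
print(zernike_radial(2, 0, 0.0))   # -> -1.0
print(zernike_radial(4, 0, 1.0))   # -> 1.0
```

Fitting a surface with a finite set of such terms acts as a low-pass filter, which is exactly why the paper can relate the order of approximation to the width of the recoverable spatial spectrum.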
The monoclonal antibody (MoAb) against CA 125 was labelled with europium with the help of a europium chelate of diethylenetriaminepentaacetic acid (DTPA). The luminescence enhancement system was an enhancement solution consisting mainly of 2-naphthoyltrifluoroacetone. The intra- and inter-assay CVs of the TRFIA were 4.5% and 4.0%, respectively, the recovery rate was 96.7%, and the sensitivity was 3.3 μg/mL. The cross-reactivity with CEA was negligible, and that with AFP and β-HCG was 4.6% and 12.4%, respectively. Compared with the imported IRMA kit, the correlation coefficient was 0.999 20. An Immersed Boundary - Adaptive Mesh Refinement solver (IB-AMR) for high fidelity fully resolved wind turbine simulations Science.gov (United States) Angelidis, Dionysios; Sotiropoulos, Fotis 2015-11-01 The geometrical details of wind turbines determine the structure of the turbulence in the near and far wake and should be taken into account when performing high fidelity calculations. Multi-resolution simulations coupled with an immersed boundary method constitute a powerful framework for high-fidelity calculations past wind farms located over complex terrains. We develop a 3D Immersed-Boundary Adaptive Mesh Refinement flow solver (IB-AMR) which enables turbine-resolving LES of wind turbines. The idea of using a hybrid staggered/non-staggered grid layout adopted in the Curvilinear Immersed Boundary Method (CURVIB) has been successfully incorporated on unstructured meshes and the fractional step method has been employed. The overall performance and robustness of the second order accurate, parallel, unstructured solver is evaluated by comparing the numerical simulations against conforming grid calculations and experimental measurements of laminar and turbulent flows over complex geometries.
We also present turbine-resolving multi-scale LES considering all the details affecting the induced flow field, including the geometry of the tower, the nacelle and especially the rotor blades of a wind tunnel scale turbine. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the Sandia National Laboratories. 1. Capital-energy complementarity in aggregate energy-economic analysis Energy Technology Data Exchange (ETDEWEB) Hogan, W.W. 1979-10-01 The interplay between capital and energy will affect the outcome of energy-policy initiatives. A static model clarifies the interpretation of the conflicting empirical evidence on the nature of this interplay. This resolves an apparent conflict between engineering and economic interpretations and points to an additional ambiguity that can be resolved by distinguishing between policy issues at aggregated and disaggregated levels. Restrictions on aggregate energy use should induce reductions in the demand for capital and exacerbate the economic impacts of the energy policy. 32 references. 2. Angle-resolved photoelectron cross section of CF4 International Nuclear Information System (INIS) Carlson, T.A.; Fahlman, A.; Svensson, W.A.; Krause, M.O.; Whitley, T.A.; Grimm, F.A.; Piancastelli, M.N.; Taylor, J.W. 1984-01-01 Partial photoelectron cross sections sigma and angular distribution parameters β were obtained for the first five valence orbitals in CF4: 1t1, 4t2, 1e, 3t2, and 4a1, as a function of photon energy from 17 to 70 eV. These data were taken with the aid of angle-resolved photoelectron spectroscopy and synchrotron radiation. The results were compared with earlier data on CCl4. Substantial differences were found. These are explained partly in terms of the absence of a Cooper minimum with a fluorine compound as opposed to the presence of a Cooper minimum with chlorine compounds and partly in terms of the position of shape resonances.
Data on CF4 were also compared with recent calculations of Stephens et al., who used the multiple-scattering Xα method. Structure in the photoelectron spectrum of CF4 lying on the low energy side of the third band was identified as due to autoionization, and evidence is given as to its specific nature 3. Enforcement actions: significant actions resolved. Quarterly progress report, January-June 1982 International Nuclear Information System (INIS) 1982-09-01 This compilation summarizes significant enforcement actions that have been resolved during two quarterly periods (January to June 1982) and includes copies of letters, notices, and orders sent by the Nuclear Regulatory Commission to the licensee with respect to the enforcement action. It is anticipated that the information in this publication will be widely disseminated to managers and employees engaged in activities licensed by the NRC, in the interest of promoting public health and safety as well as common defense and security. The intention is that this publication will be issued on a quarterly basis to include significant enforcement actions resolved during the preceding quarter 4. Annual Energy Review, 2008 Energy Technology Data Exchange (ETDEWEB) None 2009-06-01 The Annual Energy Review (AER) is the Energy Information Administration's (EIA) primary report of annual historical energy statistics. For many series, data begin with the year 1949. Included are statistics on total energy production, consumption, trade, and energy prices; overviews of petroleum, natural gas, coal, electricity, nuclear energy, renewable energy, and international energy; financial and environmental indicators; and data unit conversions. 5. Angle-resolved imaging of single-crystal materials with MeV helium ions International Nuclear Information System (INIS) Strathman, M.D.; Baumann, S. 1992-01-01 The simplest form of angle-resolved mapping for single-crystal materials is the creation of a channeling angular scan.
Several laboratories have expanded this simple procedure to include mapping as a function of two independent tilts. These angle-resolved images are particularly suited to the assessment of crystal parameters including disorder, lattice location of impurities, and lattice stress. This paper will describe the use of the Charles Evans and Associates RBS-400 scattering chamber for acquisition, display, and analysis of angle-resolved images obtained from backscattered helium ions. Typical data acquisition times are 20 min for a ±2° X-Y tilt scan with 2500 pixels (0.08° resolution), and 10 nC per pixel. In addition, we will present a method for automatically aligning crystals for channeling measurements based on this imaging technology. (orig.) 6. Local crystallography analysis for atomically resolved scanning tunneling microscopy images International Nuclear Information System (INIS) Lin, Wenzhi; Li, Qing; Belianinov, Alexei; Gai, Zheng; Baddorf, Arthur P; Pan, Minghu; Jesse, Stephen; Kalinin, Sergei V; Sales, Brian C; Sefat, Athena 2013-01-01 Scanning probe microscopy has emerged as a powerful and flexible tool for atomically resolved imaging of surface structures. However, due to the amount of information extracted, in many cases the interpretation of such data is limited to being qualitative and semi-quantitative in nature. At the same time, much can be learned from local atom parameters, such as distances and angles, that can be analyzed and interpreted as variations of local chemical bonding, or order parameter fields. Here, we demonstrate an iterative algorithm for indexing and determining atomic positions that allows the analysis of inhomogeneous surfaces. This approach is further illustrated by local crystallographic analysis of several real surfaces, including highly ordered pyrolytic graphite and an Fe-based superconductor FeTe0.55Se0.45. This study provides a new pathway to extract and quantify local properties for scanning probe microscopy images.
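The first step of an atom-indexing pipeline like the one in entry 6 is finding candidate atomic positions in the image. The sketch below is hypothetical and not the authors' algorithm (which is iterative and handles inhomogeneous surfaces): it builds a synthetic lattice of Gaussian bumps and marks pixels brighter than all eight neighbours; every name and parameter is illustrative:

```python
import math

def synthetic_lattice(size=40, spacing=8, sigma=1.5):
    """Synthetic 'STM image': Gaussian bumps on a square lattice."""
    sites = [(x, y) for x in range(spacing // 2, size, spacing)
                    for y in range(spacing // 2, size, spacing)]
    img = [[sum(math.exp(-((x - sx) ** 2 + (y - sy) ** 2) / (2 * sigma ** 2))
                for sx, sy in sites)
            for x in range(size)] for y in range(size)]
    return img, sites

def find_atoms(img, threshold=0.5):
    """Candidate atomic positions: pixels brighter than all 8 neighbours."""
    h, w = len(img), len(img[0])
    peaks = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = img[y][x]
            if v > threshold and all(
                    v > img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dx, dy) != (0, 0)):
                peaks.append((x, y))
    return peaks

img, sites = synthetic_lattice()
atoms = find_atoms(img)
print(len(atoms))   # -> 25, one detection per lattice site
```

From such a peak list, nearest-neighbour distances and bond angles can then be computed per site, which is the kind of local order-parameter field the entry describes.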
(paper) 7. Laser induced vaporization time resolved mass spectrometry of refractories International Nuclear Information System (INIS) Bonnell, D.W.; Schenck, P.K.; Hastie, J.W. 1988-01-01 An experimental approach is described which can yield information about refractory surfaces by examining the time history of the gasdynamic process occurring during pulsed Nd/YAG laser induced degradation/vaporization of the surface. Boron nitride (BN) and graphite are considered as example systems. Time resolved mass spectrometric measurements of evolved species permit direct determination of gas species identities and concentrations, independent of mass spectral cracking patterns. Of particular note is the observation of local thermodynamic equilibrium in both systems for the observed gas species laser vaporized from surfaces at temperatures of 2900 K (BN) and 3800-4100 K (graphite). Indirect methods of determining surface temperature, as alternatives to direct measurement of radiance temperature, are discussed. Also, a preliminary analysis of time-of-arrival (TOA) data is presented, including discussion of the elimination of amplifier RG response delays convoluted with the TOA data and extraction of true species time-of-arrival distributions 8. Spatially resolved fish population analysis for designing MPAs DEFF Research Database (Denmark) Christensen, Asbjørn; Mosegaard, Henrik; Jensen, Henrik 2009-01-01 The sandeel population analysis model (SPAM) is presented as a simulation tool for exploring the efficiency of Marine Protected Areas (MPAs) for sandeel stocks. SPAM simulates spatially resolved sandeel population distributions, based on a high-resolution map of all fishery-established sandbank....... The SPAM framework was tested using ICES statistical rectangle 37F2 as an MPA, and the impact on sandeel populations within the MPA and neighbouring habitats was investigated. Increased larval spillover compensated for lost catches inside the MPA.
The temporal and spatial scales of stock response to MPAs...... demonstrated that ecosystem self-regulation must be included when modelling the efficiency of MPAs, and for lesser sandeel, that self-regulation partially counteracts the benefits of a fishing sanctuary. The use of realistic habitat connectivity is critical for both qualitative and quantitative MPA assessment... 9. Electric power monthly, September 1990. [Glossary included] Energy Technology Data Exchange (ETDEWEB) 1990-12-17 The purpose of this report is to provide energy decision makers with accurate and timely information that may be used in forming various perspectives on electric issues. The power plants considered include coal, petroleum, natural gas, hydroelectric, and nuclear power plants. Data are presented for power generation, fuel consumption, fuel receipts and cost, sales of electricity, and unusual occurrences at power plants. Data are compared at the national, Census division, and state levels. 4 figs., 52 tabs. (CK) 10. Geothermal energy International Nuclear Information System (INIS) Anon. 1992-01-01 This chapter discusses the role geothermal energy may have in the energy future of the US. The topics discussed in the chapter include historical aspects of geothermal energy, the geothermal resource, hydrothermal fluids, electricity production, district heating, process heating, geopressured brines, technology and costs, hot dry rock, magma, and environmental and siting issues 11. Energy Storage. Science.gov (United States) Eaton, William W. Described are technological considerations affecting storage of energy, particularly electrical energy. The background and present status of energy storage by batteries, water storage, compressed air storage, flywheels, magnetic storage, hydrogen storage, and thermal storage are discussed, followed by a review of development trends. Included are… 12.
Panchromatic SED modelling of spatially-resolved galaxies Science.gov (United States) Smith, Daniel J. B.; Hayward, Christopher C. 2018-02-01 We test the efficacy of the energy-balance spectral energy distribution (SED) fitting code MAGPHYS for recovering the spatially-resolved properties of a simulated isolated disc galaxy, for which it was not designed. We perform 226,950 MAGPHYS SED fits to regions between 0.2 kpc and 25 kpc in size across the galaxy's disc, viewed from three different sight-lines, to probe how well MAGPHYS can recover key galaxy properties based on 21 bands of UV-far-infrared model photometry. MAGPHYS yields statistically acceptable fits to >99 per cent of the pixels within the r-band effective radius and between 59 and 77 per cent of pixels within 20 kpc of the nucleus. MAGPHYS is able to recover the distribution of stellar mass, star formation rate (SFR), specific SFR, dust luminosity, dust mass, and V-band attenuation reasonably well, especially when the pixel size is ≳ 1 kpc, whereas non-standard outputs (stellar metallicity and mass-weighted age) are recovered less well. Accurate recovery is more challenging in the smallest sub-regions of the disc (pixel scale ≲ 1 kpc), where the energy balance criterion becomes increasingly incorrect. Estimating integrated galaxy properties by summing the recovered pixel values, the true integrated values of all parameters considered except metallicity and age are well recovered at all spatial resolutions, ranging from 0.2 kpc to integrating across the disc, albeit with some evidence for resolution-dependent biases. These results must be considered when attempting to analyse the structure of real galaxies with actual observational data, for which the 'ground truth' is unknown. 13. Time-resolved ARPES with sub-15 fs temporal and near Fourier-limited spectral resolution.
Science.gov (United States) Rohde, G; Hendel, A; Stange, A; Hanff, K; Oloff, L-P; Yang, L X; Rossnagel, K; Bauer, M 2016-10-01 An experimental setup for time- and angle-resolved photoelectron spectroscopy with sub-15 fs temporal resolution is presented. A hollow-fiber compressor is used for the generation of 6.5 fs white light pump pulses, and a high-harmonic-generation source delivers 11 fs probe pulses at a photon energy of 22.1 eV. A value of 13 fs full width at half-maximum of the pump-probe cross correlation signal is determined by analyzing a photoemission intensity transient probing a near-infrared interband transition in 1T-TiSe2. Notably, the energy resolution of the setup conforms to typical values reported in conventional time-resolved photoemission studies using high harmonics, and an ultimate resolution of 170 meV is feasible. 14. Energy Statistics International Nuclear Information System (INIS) Anon. 1994-01-01 For the years 1992 and 1993, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period. The tables and figures shown in this publication are: Changes in the volume of GNP and energy consumption; Coal consumption; Natural gas consumption; Peat consumption; Domestic oil deliveries; Import prices of oil; Price development of principal oil products; Fuel prices for power production; Total energy consumption by source; Electricity supply; Energy imports by country of origin in 1993; Energy exports by recipient country in 1993; Consumer prices of liquid fuels; Consumer prices of hard coal and natural gas, prices of indigenous fuels; Average electricity price by type of consumer; Price of district heating by type of consumer and Excise taxes and turnover taxes included in consumer prices of some energy sources 15.
Resolving Radiological Classification and Release Issues for Many DOE Solid Wastes and Salvageable Materials International Nuclear Information System (INIS) Hochel, R.C. 1999-01-01 The cost-effective radiological classification and disposal of solid materials with potential volume contamination, in accordance with applicable U.S. Department of Energy (DOE) Orders, suffers from an inability to unambiguously distinguish among transuranic waste, low-level waste, and unconditional-release materials. Depending on the classification, disposal costs can vary by a hundred-fold. But in many cases, the issues can be easily resolved by a combination of process information, some simple measurements, and calculational predictions from a computer model for radiation shielding. The proper classification and disposal of many solid wastes requires a measurement regime that is able to show compliance with a variety of institutional and regulatory contamination limits. Although this is not possible for all solid wastes, there are many that do lend themselves to such measures. Several examples are discussed which demonstrate the possibilities, including one which was successfully applied to bulk contamination. The only barriers to such broader uses are the slow-to-change institutional perceptions and procedures. For many issues and materials, the measurement tools are available; they need only be applied 16. Resolving Radiological Classification and Release Issues for Many DOE Solid Wastes and Salvageable Materials Energy Technology Data Exchange (ETDEWEB) Hochel, R.C. 1999-06-14 The cost-effective radiological classification and disposal of solid materials with potential volume contamination, in accordance with applicable U.S. Department of Energy (DOE) Orders, suffers from an inability to unambiguously distinguish among transuranic waste, low-level waste, and unconditional-release materials. Depending on the classification, disposal costs can vary by a hundred-fold.
But in many cases, the issues can be easily resolved by a combination of process information, some simple measurements, and calculational predictions from a computer model for radiation shielding. The proper classification and disposal of many solid wastes requires a measurement regime that is able to show compliance with a variety of institutional and regulatory contamination limits. Although this is not possible for all solid wastes, there are many that do lend themselves to such measures. Several examples are discussed which demonstrate the possibilities, including one which was successfully applied to bulk contamination. The only barriers to such broader uses are the slow-to-change institutional perceptions and procedures. For many issues and materials, the measurement tools are available; they need only be applied. 17. Time-resolved spectroscopy of nonequilibrium ionization in laser-produced plasmas International Nuclear Information System (INIS) Marjoribanks, R.S. 1988-01-01 The highly transient ionization characteristic of laser-produced plasmas at high energy densities has been investigated experimentally, using x-ray spectroscopy with time resolution of less than 20 ps. Spectroscopic diagnostics of plasma density and temperature were used, including line ratios, line profile broadening and continuum emission, to characterize the plasma conditions without relying immediately on ionization modeling. The experimentally measured plasma parameters were used as independent variables, driving an ionization code, as a test of ionization modeling, divorced from hydrodynamic calculations. Several state-of-the-art streak spectrographs, each recording a fiducial of the laser peak along with the time-resolved spectrum, characterized the laser heating of thin signature layers of different atomic numbers embedded in plastic targets. A novel design of crystal spectrograph, with a conically curved crystal, was developed.
Coupled with a streak camera, it provided high resolution (λ/Δλ > 1000) and a collection efficiency roughly 20-50 times that of planar crystal spectrographs, affording improved spectra for quantitative reduction and greater sensitivity for the diagnosis of weak emitters. Experimental results were compared to hydrocode and ionization code simulations, with poor agreement. The conclusions question the appropriateness of describing electron velocity distributions by a temperature parameter during the time of laser illumination and emphasize the importance of characterizing the distribution more generally 18. Neutron measurements with Time-Resolved Event-Counting Optical Radiation (TRECOR) detector Science.gov (United States) Brandis, M.; Vartsky, D.; Dangendorf, V.; Bromberger, B.; Bar, D.; Goldberg, M. B.; Tittelmeier, K.; Friedman, E.; Czasch, A.; Mardor, I.; Mor, I.; Weierganz, M. 2012-04-01 Results are presented from the latest experiment with a new neutron/gamma detector, a Time-Resolved, Event-Counting Optical Radiation (TRECOR) detector. It is composed of a scintillating fiber-screen converter, bending mirror, lens and Event-Counting Image Intensifier (ECII), capable of specifying the position and time-of-flight of each event. TRECOR is designated for a multipurpose integrated system that will detect Special Nuclear Materials (SNM) and explosives in cargo. Explosives are detected by Fast-Neutron Resonance Radiography, and SNM by Dual Discrete-Energy gamma-Radiography. Neutrons and gamma-rays are both produced in the 11B(d,n+γ)12C reaction. The two detection modes can be implemented simultaneously in TRECOR, using two adjacent radiation converters that share a common optical readout. In the present experiment the neutron detection mode was studied, using a plastic scintillator converter. The measurements were performed at the PTB cyclotron, using the 9Be(d,n) neutron spectrum obtained from a thick Be-target at Ed ~ 13 MeV.
The basic characteristics of this detector were investigated, including the Contrast Transfer Function (CTF), Point Spread Function (PSF) and elemental discrimination capability. 19. Resolving Radiological Classification and Release Issues for Many DOE Solid Wastes and Salvageable Materials International Nuclear Information System (INIS) Hochel, R.C. 1999-01-01 The cost-effective radiological classification and disposal of solid materials with potential volume contamination, in accordance with applicable U.S. Department of Energy (DOE) Orders, suffers from an inability to unambiguously distinguish among transuranic waste, low-level waste, and unconditional-release materials in a generic way allowing in-situ measurement and verification. Depending on a material's classification, disposal costs can vary by a hundred-fold. With these large costs at risk, the issues involved in making defensible decisions are ripe for closer scrutiny. In many cases, key issues can be easily resolved by a combination of process information, some simple measurements, and calculational predictions from a computer model for radiation shielding. The proper classification and disposal of many solid wastes requires a measurement regime that is able to show compliance with a variety of institutional and regulatory contamination limits. Ultimate responsibility for this, of course, rests with the radiological control or health physics organization of the individual site, but there are many measurements which can be performed by operations and generation organizations to simplify the process and virtually guarantee acceptance. Although this is not possible for all potential solid wastes, there are many that do lend themselves to such measures, particularly some with large volumes and realizable cost savings. Mostly what is needed for this to happen are a few guiding rules, measurement procedures, and cross checks for potential pitfalls.
Several examples are presented here and discussed that demonstrate the possibilities, including one which was successfully applied to bulk contamination 20. HERSCHEL-RESOLVED OUTER BELTS OF TWO-BELT DEBRIS DISKS—EVIDENCE OF ICY GRAINS Energy Technology Data Exchange (ETDEWEB) Morales, F. Y.; Bryden, G.; Werner, M. W.; Stapelfeldt, K. R., E-mail: Farisa@jpl.nasa.gov [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109 (United States)] 2016-11-01 We present dual-band Herschel/PACS imaging for 59 main-sequence stars with known warm dust (T_warm ∼ 200 K), characterized by Spitzer. Of 57 debris disks detected at Herschel wavelengths (70 and/or 100 and 160 μm), about half have spectral energy distributions (SEDs) that suggest two-ring disk architectures mirroring that of the asteroid–Kuiper Belt geometry; the rest are consistent with single belts of warm, asteroidal material. Herschel observations spatially resolve the outer/cold dust component around 14 A-type and 4 solar-type stars with two-belt systems, 15 of which for the first time. Resolved disks are typically observed with radii >100 AU, larger than expected from a simple blackbody fit. Despite the absence of narrow spectral features for ice, we find that the shape of the continuum, combined with resolved outer/cold dust locations, can help constrain the grain size distribution and hint at the dust's composition for each resolved system. Based on the combined Spitzer/IRS+Multiband Imaging Photometer (5-to-70 μm) and Herschel/PACS (70-to-160 μm) data set, and under the assumption of idealized spherical grains, we find that over half of resolved outer/cold belts are best fit with a mixed ice/rock composition. Minimum grain sizes are most often equal to the expected radiative blowout limit, regardless of composition.
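The "blackbody expectation" invoked in the debris-disk entry above can be made concrete with the standard equilibrium-temperature relation for ideal blackbody grains, T_bb ≈ 278 K (L/L_sun)^(1/4) (r/AU)^(-1/2). This is a textbook relation, not a formula quoted from the paper, and the function below is only an illustrative sketch:

```python
def blackbody_radius_au(t_dust_k, lum_lsun):
    """Orbital radius (AU) at which ideal blackbody grains around a star of
    luminosity lum_lsun (in solar units) reach equilibrium temperature
    t_dust_k, inverting T_bb = 278 K * (L/Lsun)**0.25 * (r/AU)**-0.5."""
    return (278.0 / t_dust_k) ** 2 * lum_lsun ** 0.5

# Warm dust at ~200 K around a Sun-like star sits near ~2 AU
# (asteroid-belt-like); cold dust at ~50 K would sit near ~31 AU.
print(round(blackbody_radius_au(200.0, 1.0), 2))
print(round(blackbody_radius_au(50.0, 1.0), 1))
```

Resolved radii larger than this prediction, as reported in the entry, are usually read as evidence that the real grains emit inefficiently at long wavelengths and are therefore warmer than blackbodies at the same distance.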
Three of four resolved systems around the solar-type stars, however, tend to have larger minimum grains compared to expectation from blowout (f_MB = a_min/a_BOS ∼ 5). We also probe the disk architecture of 39 Herschel-unresolved systems by modeling their SEDs uniformly, and find them to be consistent with 31 single- and 8 two-belt debris systems. 1. The Resolved Stellar Population of Leo A Science.gov (United States) Tolstoy, Eline 1996-05-01 New observations of the resolved stellar population of the extremely metal-poor Magellanic dwarf irregular galaxy Leo A in Thuan-Gunn r, g, i, and narrowband Hα filters are presented. Using the recent Cepheid variable star distance determination to Leo A by Hoessel et al., we are able to create an accurate color-magnitude diagram (CMD). We have used the Bayesian inference method described by Tolstoy & Saha to calculate the likelihood of a Monte Carlo simulation of the stellar population of Leo A being a good match to the data within the well understood errors in the data. The magnitude limits on our data are sensitive enough to look back at ~1 Gyr of star formation history at the distance of Leo A. To explain the observed ratio of red to blue stars in the observed CMD, it is necessary to invoke either a steadily decreasing star formation rate toward the present time or gaps in the star formation history. We also compare the properties of the observed stellar population with the known spatial distribution of the H I gas and H II regions to support the conclusions from CMD modeling. We consider the possibility that currently there is a period of diminished star formation in Leo A, as evidenced by the lack of very young stars in the CMD and the faint H II regions. How the chaotic H I distribution, with no observable rotation, fits into our picture of the evolution of Leo A is as yet unclear. 2. Resolved Parental Infertility and Children's Educational Achievement.
Science.gov (United States) Branigan, Amelia R; Helgertz, Jonas 2017-06-01 Although difficulty conceiving a child has long been a major medical and social preoccupation, it has not been considered as a predictor of long-term outcomes in children ultimately conceived. This is consistent with a broader gap in knowledge regarding the consequences of parental health for educational performance in offspring. Here we address that omission, asking how resolved parental infertility relates to children's academic achievement. In a sample of all Swedish births between 1988 and 1995, we find that involuntary childlessness prior to either a first or a second birth is associated with lower academic achievement (both test scores and GPA) in children at age 16, even if the period of infertility was prior to a sibling's birth rather than the child's own. Our results support a conceptualization of infertility as a cumulative physical and social experience with effects extending well beyond the point at which a child is born, and emphasize the need to better understand how specific parental health conditions constrain children's educational outcomes. 3. Component-resolved diagnostics in vernal conjunctivitis. Science.gov (United States) Armentia, Alicia; Sanchís, Eugenia; Montero, Javier A 2016-10-01 Conventional diagnostic tests in allergy are insufficient to clarify the cause of vernal conjunctivitis. Component-resolved diagnostic (CRD) by microarray allergen assay may be useful in detecting allergens that might be involved in the inflammatory process. In a recent trial in patients suffered from eosinophilic esophagitis, after 2 years of the CRD-guided exclusion diet and specific immunotherapy, significant clinical improvement was observed, and 68% of patients were discharged (cure based on negative biopsy, no symptoms, and no medication intake). 
Our new objective was to evaluate IgE-mediated hypersensitivity by CRD in tears and serum from patients with vernal conjunctivitis and treat patients with identified triggering allergens by specific immunotherapy. Twenty-five patients with vernal conjunctivitis were evaluated. The identified triggering allergens were n Lol p 1 (11 cases), n Cyn d 1 (eight cases), group 4 and 6 grasses (six cases) and group 5 of grasses (five cases). Prick test and pollen IgE were positive in one case. Clinical improvement was observed in 13/25 vernal conjunctivitis patients after 1-year specific immunotherapy. CRD seems to be a more sensitive diagnostic tool compared with prick test and IgE detection. Specific CRD-led immunotherapy may achieve clinical improvements in vernal conjunctivitis patients. 4. Component Resolved Diagnosis in Hymenoptera Anaphylaxis. Science.gov (United States) Tomsitz, D; Brockow, K 2017-06-01 Hymenoptera anaphylaxis is one of the leading causes of severe allergic reactions and can be fatal. Venom-specific immunotherapy (VIT) can prevent a life-threatening reaction; however, confirmation of an allergy to a Hymenoptera venom is a prerequisite before starting such a treatment. Component resolved diagnostics (CRD) have helped to better identify the responsible allergen. Many new insect venom allergens have been identified within the last few years. Commercially available recombinant allergens offer new diagnostic tools for detecting sensitivity to insect venoms. Additional added sensitivity to nearly 95% was introduced by spiking yellow jacket venom (YJV) extract with Ves v 5. The further value of CRD for sensitivity in YJV and honey bee venom (HBV) allergy is more controversially discussed. Recombinant allergens devoid of cross-reactive carbohydrate determinants often help to identify the culprit venom in patients with double sensitivity to YJV and HBV. 
CRD identified a group of patients with predominant Api m 10 sensitization, which may be less well protected by VIT, as some treatment extracts are lacking this allergen. The diagnostic gap of previously undetected Hymenoptera allergy has been decreased via production of recombinant allergens. Knowledge of analogies in interspecies proteins and cross-reactive carbohydrate determinants is necessary to distinguish relevant from irrelevant sensitizations. 5. Fully Resolved Simulations of 3D Printing Science.gov (United States) Tryggvason, Gretar; Xia, Huanxiong; Lu, Jiacai 2017-11-01 Numerical simulations of Fused Deposition Modeling (FDM) (or Fused Filament Fabrication), where a filament of hot, viscous polymer is deposited to "print" a three-dimensional object, layer by layer, are presented. A finite volume/front tracking method is used to follow the injection, cooling, solidification and shrinking of the filament. The injection of the hot melt is modeled using a volume source, combined with a nozzle, modeled as an immersed boundary, that follows a prescribed trajectory. The viscosity of the melt depends on the temperature and the shear rate, and the polymer becomes immobile as its viscosity increases. As the polymer solidifies, the stress is found by assuming a hyperelastic constitutive equation. The method is described and its accuracy and convergence properties are tested by grid refinement studies for a simple setup involving two short filaments, one on top of the other. The effect of the various injection parameters, such as nozzle velocity and injection velocity, is briefly examined and the applicability of the approach to simulate the construction of simple multilayer objects is shown. The role of fully resolved simulations for additive manufacturing and their use for novel processes and as the "ground truth" for reduced order models is discussed. 6.
Time resolved ion beam induced charge collection International Nuclear Information System (INIS) Sexton W, Frederick; Walsh S, David; Doyle L, Barney; Dodd E, Paul 2000-01-01 Under this effort, a new method for studying single event upset (SEU) in microelectronics has been developed and demonstrated. Called TRIBICC, for Time Resolved Ion Beam Induced Charge Collection, this technique measures the transient charge-collection waveform from a single heavy-ion strike with a −3 dB bandwidth of 5 GHz. Bandwidth can be expanded up to 15 GHz (with 5 ps sampling windows) by using an FFT-based off-line waveform renormalization technique developed at Sandia. The theoretical time resolution of the digitized waveform is 24 ps with data re-normalization and 70 ps without re-normalization. To preserve the high bandwidth from IC to the digitizing oscilloscope, individual test structures are assembled in custom high-frequency fixtures. A leading-edge digitized waveform is stored with the corresponding ion beam position at each point in a two-dimensional raster scan. The resulting data cube contains a spatial charge distribution map of up to 4,096 traces of charge (Q) collected as a function of time. These two-dimensional traces of Q(t) can cover a period as short as 5 ns with up to 1,024 points per trace. This tool overcomes limitations observed in previous multi-shot techniques due to the displacement damage effects of multiple ion strikes that changed the signal of interest during its measurement. This system is the first demonstration of a single-ion transient measurement capability coupled with spatial mapping of fast transients. 7. Resolving Gas-Phase Metallicity In Galaxies Science.gov (United States) Carton, David 2017-06-01 Chapter 2: As part of the Bluedisk survey we analyse the radial gas-phase metallicity profiles of 50 late-type galaxies. We compare the metallicity profiles of a sample of HI-rich galaxies against a control sample of HI-'normal' galaxies.
We find the metallicity gradient of a galaxy to be strongly correlated with its HI mass fraction (M_HI/M_*). We note that some galaxies exhibit a steeper metallicity profile in the outer disc than in the inner disc. These galaxies are found in both the HI-rich and control samples. This contradicts a previous indication that these outer drops are exclusive to HI-rich galaxies. These effects are not driven by bars, although we do find some indication that barred galaxies have flatter metallicity profiles. By applying a simple analytical model we are able to account for the variety of metallicity profiles that the two samples present. The success of this model implies that the metallicity in these isolated galaxies may be in a local equilibrium, regulated by star formation. This insight could provide an explanation of the observed local mass-metallicity relation. Chapter 3: We present a method to recover the gas-phase metallicity gradients from integral field spectroscopic (IFS) observations of barely resolved galaxies. We take a forward modelling approach and compare our models to the observed spatial distribution of emission line fluxes, accounting for the degrading effects of seeing and spatial binning. The method is flexible and is not limited to particular emission lines or instruments. We test the model through comparison to synthetic observations and use downgraded observations of nearby galaxies to validate this work. As a proof of concept we also apply the model to real IFS observations of high-redshift galaxies. From our testing we show that the inferred metallicity gradients and central metallicities are fairly insensitive to the assumptions made in the model and that they are reliably recovered for galaxies. 8. Angle-resolved ion TOF spectrometer with a position sensitive detector Energy Technology Data Exchange (ETDEWEB) Saito, Norio [Electrotechnical Lab., Tsukuba, Ibaraki (Japan); Heiser, F.; Wieliczec, K.; Becker, U.
1996-07-01 An angle-resolved ion time-of-flight mass spectrometer with a position-sensitive anode has been investigated. The performance of this spectrometer has been demonstrated by measuring the angular distribution of a fragment ion pair, C{sup +} + O{sup +}, from CO at a photon energy of 287.4 eV. The obtained angular distribution is very close to the theoretically expected one. (author) 9. Ultrafast Structural Dynamics in InSb Probed by Time-Resolved X-Ray Diffraction International Nuclear Information System (INIS) Chin, A.H.; Shank, C.V.; Chin, A.H.; Schoenlein, R.W.; Shank, C.V.; Glover, T.E.; Leemans, W.P.; Balling, P. 1999-01-01 Ultrafast structural dynamics in laser-perturbed InSb are studied using time-resolved x-ray diffraction with a novel femtosecond x-ray source. We report the first observation of a delay in the onset of lattice expansion, which we attribute to energy relaxation processes and lattice strain propagation. In addition, we observe direct indications of ultrafast disordering on a subpicosecond time scale. copyright 1999 The American Physical Society 10. 239Pu neutron cross-sections in the resolved-resonance region International Nuclear Information System (INIS) Luk'yanov, A.A.; Kolesov, V.V.; Toshkov, S.; Yaneva, N. 1988-01-01 The authors have determined the multi-level parameters for description of the total and fission cross-sections for 239Pu in the resolved-resonance region up to 500 eV. A method has been developed for the construction of the elastic scattering and radiative capture resonance cross-sections using these parameters. The group-averaged cross-sections for experimental and evaluated data have been calculated in the energy region considered. (author). Refs, 4 tabs 11. Time-resolved phase measurement of a self-amplified free-electron laser International Nuclear Information System (INIS) We report on the first time-resolved phase measurement on self-amplified spontaneous emission (SASE) free-electron laser (FEL) pulses.
We observed that the spikes in the output of such free-electron laser pulses have an intrinsic positive chirp. We also observed that the energy chirp in the electron bunch mapped directly into the FEL output. Under certain conditions, the two chirps cancel each other. The experimental result was compared with simulations and interpreted with SASE theory 12. Investigation of microstructure in additive manufactured Inconel 625 by spatially resolved neutron transmission spectroscopy OpenAIRE Tremsin, Anton S.; Gao, Yan; Dial, Laura C.; Grazzi, Francesco; Shinohara, Takenao 2016-01-01 Abstract Non-destructive testing techniques based on neutron imaging and diffraction can provide information on the internal structure of relatively thick metal samples (up to several cm), which are opaque to other conventional non-destructive methods. Spatially resolved neutron transmission spectroscopy is an extension of traditional neutron radiography, where multiple images are acquired simultaneously, each corresponding to a narrow range of energy. The analysis of transmission spectra ena... 13. Electronic structure of Sr2RuO4 studied by angle-resolved photoemission spectroscopy International Nuclear Information System (INIS) Iwasawa, H.; Aiura, Y.; Saitoh, T.; Yoshida, Y.; Hase, I.; Ikeda, S.I.; Bando, H.; Kubota, M.; Ono, K. 2007-01-01 Electronic structure of the monolayer strontium ruthenate Sr 2 RuO 4 was investigated by high-resolution angle-resolved photoemission spectroscopy. We present photon-energy (hν) dependence of the electronic structure near the Fermi level along the ΓM line. The hν dependence has shown a strong spectral weight modulation of the Ru 4d xy and 4d zx bands 14. Study of High Temperature Superconductors with Angle-Resolved Photoemission Spectroscopy Energy Technology Data Exchange (ETDEWEB) Dunn, Lisa 2003-05-13 The Angle Resolved Photoemission Spectroscopy (ARPES) recently emerged as a powerful tool for the study of highly correlated materials. 
This thesis describes the new generation of ARPES experiment, based on the third generation synchrotron radiation source and utilizing very high resolution electron energy and momentum analyzer. This new setup is used to study the physics of high temperature superconductors. New results on the Fermi surfaces, dispersions, scattering rate and superconducting gap in high temperature superconductors are presented. 15. (including travel dates) Proposed itinerary Ashok 31 July to 22 August 2012 (including travel dates). Proposed itinerary: Arrival in Bangalore on 1 August. 1-5 August: Bangalore, Karnataka. Suggested institutions: Indian Institute of Science, Bangalore. St Johns Medical College & Hospital, Bangalore. Jawaharlal Nehru Centre, Bangalore. 6-8 August: Chennai, TN. 16. Handbook on energy conservation International Nuclear Information System (INIS) 1989-12-01 This book shows energy situation in recent years, which includes reserves of energy resource in the world, crude oil production records in OPEC and non OPEC, supply and demand of energy in important developed countries, prospect of supply and demand of energy and current situation of energy conservation in developed countries. It also deals with energy situation in Korea reporting natural resources status, energy conservation policy, measurement for alternative energy, energy management of Korea, investment in equipment and public education for energy conservation. 17. Broadband Comb-Resolved Cavity Enhanced Spectrometer with Graphene Modulator Science.gov (United States) Lee, Kevin; Mohr, Christian; Jiang, Jie; Fermann, Martin; Lee, Chien-Chung; Schibli, Thomas R.; Kowzan, Grzegorz; Maslowski, Piotr 2015-06-01 Optical cavities enhance sensitivity in absorption spectroscopy. While this is commonly done with single wavelengths, broad bandwidths can be coupled into the cavity using frequency combs. 
The combination of cavity enhancement and broad bandwidth allows simultaneous measurement of tens of transitions with high signal-to-noise for even weak near-infrared transitions. This removes the need for time-consuming sequencing acquisition or long-term averaging, so any systematic errors from long-term drifts of the experimental setup or slow changes of sample composition are minimized. Resolving comb lines provides a high accuracy, absolute frequency axis. This is of great importance for gas metrology and data acquisition for future molecular lines databases, and can be applied to simultaneous trace-gas detection of gas mixtures. Coupling of a frequency comb into a cavity can be complex, so we introduce and demonstrate a simplification. The Pound-Drever-Hall method for locking a cavity and a frequency comb together requires a phase modulation of the laser output. We use the graphene modulator that is already in the Tm fiber laser cavity for controlling the carrier envelope offset of the frequency comb, rather than adding a lossy external modulator. The graphene modulator can operate at frequencies of over 1~ MHz, which is sufficient for controlling the laser cavity length actuator which operates below 100~kHz. We match the laser cavity length to fast variations of the enhancement cavity length. Slow variations are stabilized by comparison of the pulse repetition rate to a GPS reference. The carrier envelope offset is locked to a constant value chosen to optimize the transmitted spectrum. The transmitted pulse train is a stable frequency comb suitable for long measurements, including the acquisition of comb-resolved Fourier transform spectra with a minimum absorption coefficient of about 2×10-7 wn. For our 38 cm long enhancement cavity, the comb spacing is 394~MHz. With our 18. 
Spatiotemporally resolved characteristics of a gliding arc discharge in a turbulent air flow at atmospheric pressure DEFF Research Database (Denmark) Zhu, Jiajian; Gao, Jinlong; Ehn, Andreas 2017-01-01 A gliding arc discharge was generated in a turbulent air flow at atmospheric pressure driven by a 35 kHz alternating current (AC) electric power. The spatiotemporally resolved characteristics of the gliding arc discharge, including glow-type discharges, spark-type discharges, short-cutting events... 19. Enforcement actions: significant actions resolved. Quarterly progress report, October-December 1985. Volume 4, No. 4 International Nuclear Information System (INIS) 1986-02-01 This compilation summarizes significant enforcement actions that have been resolved during one quarterly period (October - December 1985) and includes copies of letters, Notices, and Orders sent by the Nuclear Regulatory commission to licensees with respects to these enforcement actions, and the licensees' responses 20. The uses of alternative dispute resolution to resolve genetic disputes. Final report Energy Technology Data Exchange (ETDEWEB) Stein, Robert E. 2003-01-01 The report sets out lessons learned while carrying out the study. It concludes that genetic disputes will increase in number and that ADR processes including mediation, arbitration, the use of independent experts and court-appointed masters can be helpful in resolving them. It suggests additional effort on bioremediation, and workplace disputes and training for ADR neutrals.
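As a sanity check on the numbers in the comb-resolved cavity-enhanced spectrometer abstract above (item 17), the quoted comb spacing follows directly from the free spectral range of a 38 cm linear cavity, FSR = c/2L. A quick worked calculation:

```python
c = 299_792_458.0  # speed of light in vacuum, m/s
L = 0.38           # enhancement-cavity length, m (38 cm, as quoted)

# Free spectral range of a linear cavity: FSR = c / (2 L)
fsr_mhz = c / (2 * L) / 1e6
print(round(fsr_mhz))  # 394, matching the quoted 394 MHz comb spacing
```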
https://together.jolla.com/question/142007/how-to-unlock-3g-or-lte-bands-on-intex-aqua-fish/?sort=latest
# How to unlock 3G or LTE bands on Intex Aqua Fish

The Intex Aqua Fish supports the 2300/1800/850 frequencies: bands 3 and 5 (FDD) and band 40 (TDD). But it doesn't work in Russia or the USA, for example. What can be done to make 3G and 4G/LTE work again?

Comments:
- Does it not support more? It is the same as the Jolla C, and that phone supports more. Same Snapdragon. Maybe we need to wait for a new update of SailfishOS. (2016-08-17)
- @richdb: The Snapdragon chipset supports all LTE bands, but only some of them are unlocked on the phones. The unlocked bands differ between the Jolla C and the Aqua Fish. I don't know whether this is a pure software issue or whether there are (minor) HW differences (e.g. antenna design) as well. (2016-08-18)
- Retagged, so it's easier to find. (2016-08-18)
- I am using an Intex Aqua Fish in Russia. LTE works perfectly, at least on MTS. (2016-08-20)
- Fedorka, which region? (2016-08-21)

Answer: How can I add you on XMPP?

Answer: I have successfully used the method posted here and there to unlock all WCDMA bands on my Aqua Fish. Unfortunately, achieving this needs MS Windows and a lot of proprietary software. Diagnostic mode on devices running SFOS can be enabled under "Settings/USB". NOTE: please leave the GSM bits as-is, otherwise SIM2 will malfunction when SIM1 is used as the primary card. The WCDMA bits can all be selected with no observed drawback. Fortunately, all modifications applied with this method are reversible. Unlocking LTE bands requires using NV-items_reader_writer, posted here, to modify NV items 6828 and 6829 (both share the same content). These two NV items will revert to their last value if unlocking fails after the phone reboots to apply the modification, so if your modifications are retained after reboot, the unlocking seems to have succeeded.
Using NV-items_reader_writer, read the values of NV 6828 and 6829 (via "Range") into a text file (hexdump-like format). You can then modify the file according to the "byte-swapped" representation of the value calculated with NV Calculator, and use NV-items_reader_writer to write the modified text file back to the phone. Wait until all the signal indication bars turn dark, then write the modified text again. After another 10 seconds you can turn the phone off and relaunch it. After the reboot, read the two NV items again to check whether the modifications were retained. I have modified NV 6828 and 6829 on my own phone from 14 00 00 00 80 00 (bands 3,5,40) to D5 00 08 00 E0 01 (bands 1,3,5,7,8,20,38,39,40,41), but have not tried SIM cards provided by carriers other than China Unicom, which is said to work on bands 1,3,8,40,41.

Comments:
- Hello! Do you use Telegram or WhatsApp or Viber? I want to ask you a couple of questions :) (2017-10-02)
- They are all proprietary. I use IRC and XMPP. (2017-10-02)
- @mr.upolo - why do you want to ask in private? :) Share it with us; I'm one of those who would like to unlock that too, but I still have no time for it. I just wanted to know whether it unlocks some frequencies for Europe, if somebody tries it ;) @persmule - thanks for your information! (2017-10-02)
- @mr.upolo How can I add you on XMPP? Why not post your own account here first? (2017-10-02)
- upolo@conversations.im (2017-10-02)

Answer: The Intex Aqua Fish is essentially useless in the US on T-Mobile, since 3G and 4G/LTE no longer work. The hardware supports it, but either newer Qualcomm drivers installed with the hardware or Sailfish OS changes during updates threw the functionality out. The phone, while nice, is basically a colourful door stopper at 2G!

Comments:
- Please stop this. The Aqua Fish has a different hardware modem and antennas, not designed to work on frequencies other than 2300/1800/850.
THIS IS A HARDWARE LIMITATION! (2017-05-17)
- @coderus, not true: 1900 GSM is supported by T-Mobile, which is 3G and 4G I believe; I had it running on my Intex last year in the US. Band 2, 15 LTE uplink is possible using 1900; Band 33, 35 LTE uplink and downlink are possible using the 1900 frequency supported by the Aqua Fish modem. So when I say I had LTE running correctly and successfully in the US, I mean it. (2017-05-17)
- @DarkTuring: 2G/GSM support on a given frequency does not imply 3G/UMTS or 4G/LTE support; please stop spreading misinformation. (2017-05-18)

Answer (community wiki): The Intex Aqua Fish is working perfectly, also on LTE/4G, in Switzerland. I'm a very happy customer :-)

Answer: I see a lot of proposed procedures, but I can't tell whether anyone has really succeeded. Can someone please confirm that it is practically possible and that it works?

Comments:
- The answer is straightforward: as of now there are no reports of success in unlocking the "European" bands on the Aqua Fish. (2016-09-09)
- Thank you, at least it's clear. So we are currently completely without a way to get a Sailfish-powered phone in Europe: Jolla C: out of stock; Intex Aqua Fish: incompatible; Oysters SF: not available. I will pray nothing happens to my old Jolla the First. (2016-09-09)

Answer: I tried flashing the EU modem on the Intex. It didn't work. I also tried erasing modemst1 and modemst2 and flashing persist; tried flashing values from the Jolla C onto the Intex with QPST and QXDM (like the manual band unlock on the OnePlus One); and tried custom values. A driver to connect with QPST/QXDM in diagnostic mode: https://yadi.sk/d/by5jzvtZuVZwH (Lenovo Diagnostic Interface). None of it worked. Thanks to Coderus and the Russian community, who tried with me and helped.
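The NV 6828/6829 payloads quoted in the unlock answer above (14 00 00 00 80 00 for bands 3, 5, 40 and D5 00 08 00 E0 01 for bands 1, 3, 5, 7, 8, 20, 38, 39, 40, 41) are consistent with a little-endian bitmask in which bit n-1 enables LTE band n. A minimal sketch that reproduces both quoted values; this is my own reading of the bytes, not an official Qualcomm format specification:

```python
def lte_band_mask(bands, width=6):
    """Render a list of LTE band numbers as the little-endian NV-item
    byte string: bit (n - 1) of the mask is set for each enabled band n."""
    mask = 0
    for n in bands:
        mask |= 1 << (n - 1)
    return " ".join(f"{b:02X}" for b in mask.to_bytes(width, "little"))

print(lte_band_mask([3, 5, 40]))
# 14 00 00 00 80 00
print(lte_band_mask([1, 3, 5, 7, 8, 20, 38, 39, 40, 41]))
# D5 00 08 00 E0 01
```

Reading the stock value back the other way: 0x14 = 0b10100 has bits 2 and 4 set, i.e. bands 3 and 5, and the 0x80 in the fifth byte is bit 39, i.e. band 40 — matching the Aqua Fish's advertised bands.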
Comments:
- Can you please explain how exactly you tried to flash the EU modem firmware? Was it the way @g7 suggested in his reply (installing the package from the repos and running flash-modem.sh on boot) or some other method? (2016-08-28)
- The method of installing the modem with preinit failed on my device. I simply copied the .bin into /boot and ran flash-modem.sh, and flashed the IN modem back the same way. (2016-08-28)
- I tried with: pkcon remove droid-modem-l500d-in, which removes the Intex version and automatically installs the EU one; then I manually flashed the firmware. After reboot I was not able to use the SIM and my logs were full of:

Sep 01 12:20:01 Sailfish connmand[624]: [ofono] ERROR! GDBus.Error:org.ofono.Error.InProgress: Operation already in progress
Sep 01 12:20:02 Sailfish connmand[624]: [ofono] ERROR! GDBus.Error:org.ofono.Error.Failed: Operation failed
[the InProgress error repeats roughly every two seconds, with an occasional Failed]
Sep 01 12:20:34 Sailfish ofonod[622]: Power request failed: GENERIC_FAILURE

Rolling back, connected to WiFi :-), with pkcon remove droid-modem-l500d-eu and a manual reflash, everything works again. Here in northern Italy I can use LTE with the original firmware (Vodafone here uses Band 3), so this is not a big problem... but I'd sure like to flash the EU firmware. (2016-09-01)
- I live in the middle of Germany, outside any city, in a rural area. My Jolla 1 always connects to the Internet with LTE; I think it must be the 800 MHz LTE band. Two days ago I got my Aqua Fish. This phone only connects with 3G (UMTS) or 2G. In the city the Aqua Fish connects with LTE; I think it must be the 1800 MHz band. The specs of the Aqua Fish say that it supports the 850 MHz band, so the difference between 800 MHz and 850 MHz should not be too big. I think it must be possible to activate the 800 MHz band on the Aqua Fish with a small software update. In another thread somebody told me that Jolla Zendesk said there are small differences in the hardware between the Jolla C and the Aqua Fish, and so it's not possible to activate the LTE band for Europe.
But the difference between 800 MHz and 850 MHz should be small enough to bridge with a small software change. Correct me if I am wrong. (2016-09-03)
- The Aqua Fish has band 5, which is 824–849 MHz (uplink) and 869–894 MHz (downlink). Band 20 (800 MHz), however, is 832–862 MHz (uplink) and 791–821 MHz (downlink), according to Wikipedia. The uplink frequencies are quite close, but the downlink ones are rather different, so I doubt it's only a software limitation. (2016-09-09)

Answer: The droid-modem-l500d-eu package in adaptation0 contains the EU modem firmware. Jolla Cs have this package installed, while I guess Aqua Fishes ship with the droid-modem-l500d-in sibling (note: they obviously conflict with each other). You may try installing it, keeping in mind that this operation may brick your phone. The firmware should be flashed at the next reboot after issuing this command as root:

add-preinit-oneshot /var/lib/platform-updates/flash-modem.sh

(Note: I haven't tried doing the inverse (-eu -> -in) on my Jolla C, and I won't. Take this answer as an FYI and keep in mind that if something goes wrong you get to keep both pieces.) EDIT: As I'm seeing Europeans on the net finally getting their imported Aqua Fishes from India and referring to this answer, I'd like to stress that this hasn't been tested at all and may very well not work. I posted this answer because it might be worth a try and might be better than referring to obscure guides on XDA. Try it only if you're ready to eventually kiss your new toy goodbye.

Comments:
- Great! So there's no need to dump from /dev/... :-) It could be sufficient to download the package, extract the image, and flash it. (2016-08-20)
- Some tutorial? (2016-08-20)
- Sounds like you just install that package from the repos. It probably reflashes the modem during install/reboot for you.
However, you should probably wait until that actually has any chance of giving you extra bands: https://together.jolla.com/question/137336/lte-with-jolla-c/ (2016-08-20)
- Assuming you are using a Linux OS on your PC, I think you can try something like this. Create and use a work dir:

mkdir modem_firmware
cd modem_firmware

Download repomd.xml, which contains the relative URIs of the package lists; pay attention to the OS release in the URL. You have to substitute "c2NlZ2xpYXU6cGFzc3dvcmQ=" with your credentials, base64 encoded:

echo -n "scegliau:password" | base64
c2NlZ2xpYXU6cGFzc3dvcmQ=

and you should see something like this:

[...]
<data type="primary">
<checksum type="sha256">72a2f27c7d57d941f573b3dfc4e9dd7881e3aca2eac86bdabf7ba393ac8c4290</checksum>
<timestamp>1465392648</timestamp>
<size>14873</size>
<open-size>120284</open-size>
<open-checksum type="sha256">23a7cd09c6d516f9618dd4dc1ca3c7280c10166b746f94fcb2dd456821fb4e41</open-checksum>
<location href="repodata/72a2f27c7d57d941f573b3dfc4e9dd7881e3aca2eac86bdabf7ba393ac8c4290-primary.xml.gz"/>
</data>
[...]

where "location" is the URI of the package list file.
Then you can use that URI to do:

curl -c /tmp/cookie.jar -b /tmp/cookie.jar -L -v -k -H 'Authorization: Basic c2NlZ2xpYXU6cGFzc3dvcmQ=' https://store-repository.jolla.com/releases/2.0.2.45/jolla-hw/adaptation-qualcomm-l500d/armv7hl/repodata/72a2f27c7d57d941f573b3dfc4e9dd7881e3aca2eac86bdabf7ba393ac8c4290-primary.xml.gz -o adaptation0_l500d.xml.gz
grep droid-modem adaptation0_l500d.xml | grep href

and you'll see something like:

<location href="armv7hl/droid-modem-l500d-eu-0.0.8.2-10.8.1.jolla.armv7hl.rpm"/>
<location href="armv7hl/droid-modem-l500d-in-0.0.8.2-10.8.1.jolla.armv7hl.rpm"/>

In the next step, you'll download the EU version of the firmware:

curl -c /tmp/cookie.jar -b /tmp/cookie.jar -L -v -k -H 'Authorization: Basic c2NlZ2xpYXU6cGFzc3dvcmQ=' https://store-repository.jolla.com/releases/2.0.2.45/jolla-hw/adaptation-qualcomm-l500d/armv7hl/armv7hl/droid-modem-l500d-eu-0.0.8.2-10.8.1.jolla.armv7hl.rpm -o droid-modem-l500d-eu-0.0.8.2-10.8.1.jolla.armv7hl.rpm

and then unpack it:

rpm2cpio droid-modem-l500d-eu-0.0.8.2-10.8.1.jolla.armv7hl.rpm | cpio -idmv

Now you have two useful files in your work dir: ./boot/NON-HLOS.bin, which is the firmware, and the command to flash it (inside a shell script). While waiting for my phone (it's in Mumbai now...), I hope this can help you try a reflash of the modem firmware. ( 2016-08-20 19:09:10 +0300 )

The postinst script in the RPM schedules the reflash by itself, but it - understandably - does so only on upgrades. The reflash can be scheduled manually with:

add-preinit-oneshot /var/lib/platform-updates/flash-modem.sh

I haven't tried (and won't try) switching the modem firmware on my Jolla C to the -in variant, so I can't say that this method works, but if it's true that the Jolla C and Aqua Fish share exactly the same hardware (and I don't have any reason to believe they don't), installing the -eu package (and scheduling the reflash) should work.
Using the package is cleaner than flashing manually, and it also ensures that the modem firmware gets updated along with the rest of Sailfish. ( 2016-08-20 19:58:34 +0300 )

I think the answer is here: https://www.frequencycheck.com/compatibility/bz7rcnY/intex-aqua-fish-td-lte-dual-sim/russia The Aqua Fish seems to work partially with Tele2 in Russia. For France it's the same problem :-( But you must have 3G on the Aqua Fish?

Yeah, I have 3G, but it's so slooooow that I can't watch a 720p video. ( 2016-08-20 09:51:06 +0300 )

Maybe someone with a Jolla C can dump (with dd) the content of /dev/block/platform/msm_sdcc.1/by-name/modem and someone else with an Aqua Fish can try to flash it like an original NON-HLOS.bin. I'm still waiting for my Intex phone.... when it arrives, I'll probably try.

To try it we need someone who has a Jolla C :c ( 2016-08-19 19:54:45 +0300 )

Sure we do :-) ( 2016-08-20 09:46:41 +0300 )

The path I posted is present on the Jolla 1; it may be different on the Jolla C. There's no longer any need to dump that partition: as g7 said, the firmware is present in a package downloadable from the repository. ( 2016-08-29 11:11:33 +0300 )

I've found a guide for Qualcomm devices, but it's for Android devices... anybody want to try it? :D http://forum.xda-developers.com/crossdevice-dev/sony/thread-progress-please-leave-im-updating-t2871269

Unfortunately this is not working on the Intex Aqua Fish.
I tried it with the following NV item configuration:

01877  rf_bc_config       RF Band Configuration                 2307813334318580608   10000000000111000000000000000000001111111110000000001110000000
04548  rf_bc_config_div   RF BC Configuration Diversity         2307813334318580608   10000000000111000000000000000000001111111110000000001110000000
00441  band_pref          Band Class Preference                 0x0380                0000001110000000
00946  band_pref_16_31    Expand Band Preference 16 To 32 Bits  0x0FF8                0000111111111000
02954  band_pref_32_63    Bits 32 To 63 Of Band Pref            537329664             100000000001110000000000000000
06828  LTE_BC_CONFIG      LTE BC Config                         18446744073709551615  1111111111111111111111111111111111111111111111111111111111111111
       (or, like the Jolla C: 524357 = 10000000000001000101)
06829  lte_bc_config_div  LTE BC Config DIV                     18446744073709551615  1111111111111111111111111111111111111111111111111111111111111111
       (or, like the Jolla C: 524357 = 10000000000001000101)

But it still can't connect to LTE Band 20 (800 MHz). Additionally flashing the "EU" firmware did not help either :-( It gives the same error that scegliau already posted about. So it really has to be a hardware limitation, as some of you already said... ( 2016-11-19 21:43:35 +0300 )
https://kb.osu.edu/dspace/handle/1811/29668
# Intensity Profiles Of Vibronic Transitions In Acetylene

Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/29668

Title: Intensity Profiles Of Vibronic Transitions In Acetylene
Creators: Lucas, Donald; Solina, Stephani Ann B.; O'Brien, Jonathan P.; Field, R. W.; Polik, William F.
Issue Date: 1995
Publisher: Ohio State University
Abstract: A normal mode analysis of acetylene in both the $\tilde{X}$ and $\tilde{A}$ state was performed based on experimentally available force constants$^{1,2}$. The analysis utilized the GF matrix approach, from which normal coordinates were obtained. These normal coordinates were used to evaluate multi-dimensional overlap integrals of the wave functions for the two states. The results of these overlap integrals provided intensity contours for vibronic transitions which can be compared to experimental dispersed fluorescence data$^{3}$.
Description: $^{1}$ G. Strey, I.M. Mills, J. Mol. Spec. 59, 103 (1976). $^{2}$ J.D. Tobiason, A.L. Utz, E.L. Sibert III, F.F. Crim, J. Chem. Phys. 99, 5762 (1993). $^{3}$ S.A.B. Solina, J.P. O'Brien, R.W. Field, W.F. Polik, Ber. Bunsen. Phys. Chem., accepted.
Author Institution: Massachusetts Institute of Technology, Cambridge, MA 02139; Hope College, Holland, MI 49423
URI: http://hdl.handle.net/1811/29668
Other Identifiers: 1995-RH-12
http://intergraph.r-forge.r-project.org/howto.html
back to intergraph home

# Short tutorial on using functions in package “intergraph”

This is a short tutorial showing how to use functions in package “intergraph” using some example network data contained in the package.

Typographical conventions: R input code is shown in frames with grey background, R output is shown in frames with white background with lines preceded by ## (two hash symbols). For example:

# This is some input code with output below
x <- 2 + 2
x
## [1] 4

RMarkdown source of this document can be found here.

## Example networks

Package intergraph contains four example networks:

• Objects exNetwork and exIgraph contain the same directed network as objects of class “network” and “igraph” respectively.
• Objects exNetwork2 and exIgraph2 contain the same undirected network as objects of class “network” and “igraph” respectively.

All four datasets contain:

• A vertex attribute label with vertex labels. These are the letters from a to o.
• An edge attribute label with edge labels. These are the pasted letters of the adjacent nodes.
• A network-level attribute layout storing a function that computes the vertex placement for plotting. It is a copy of the layout.fruchterman.reingold function from package igraph.

We will use them in the examples below. To show the data, first load the packages.

library(intergraph)
library(network)

## network: Classes for Relational Data Version 1.7.2 created on March 15,
## 2013. copyright (c) 2005, Carter T. Butts, University of California-Irvine
## Mark S. Handcock, University of Washington David R. Hunter, Penn State
## University Martina Morris, University of Washington For citation
## information, type citation("network"). Type help("network-package") to
## get started.
library(igraph)

## Attaching package: 'igraph'
## The following objects are masked from 'package:network':
##
##     get.edge.attribute, get.edges, get.vertex.attribute, is.bipartite,
##     is.directed, list.edge.attributes, list.vertex.attributes, %s%,
##     set.edge.attribute, set.vertex.attribute

Now, these are the summaries of the “igraph” objects:

summary(exIgraph)
## IGRAPH D--- 15 11 --
## attr: layout (g/x), label (v/c), label (e/c)

summary(exIgraph2)
## IGRAPH U--- 15 11 --
## attr: layout (g/x), label (v/c), label (e/c)

These are the summaries of the “network” objects:

exNetwork
## Network attributes:
##   vertices = 15
##   directed = TRUE
##   hyper = FALSE
##   loops = FALSE
##   multiple = FALSE
##   bipartite = FALSE
##   total edges= 11
##     missing edges= 0
##     non-missing edges= 11
##
## Vertex attribute names:
##   label vertex.names

exNetwork2
## Network attributes:
##   vertices = 15
##   directed = FALSE
##   hyper = FALSE
##   loops = FALSE
##   multiple = FALSE
##   bipartite = FALSE
##   total edges= 11
##     missing edges= 0
##     non-missing edges= 11
##
## Vertex attribute names:
##   label vertex.names

The networks are plotted below using the following code:

layout(matrix(1:4, 2, 2, byrow = TRUE))
op <- par(mar = c(1, 1, 2, 1))
# compute layout
coords <- layout.fruchterman.reingold(exIgraph)
plot(exIgraph, main = "exIgraph", layout = coords)
plot(exIgraph2, main = "exIgraph2", layout = coords)
plot(exNetwork, main = "exNetwork", displaylabels = TRUE, coord = coords)
plot(exNetwork2, main = "exNetwork2", displaylabels = TRUE, coord = coords)
par(op)

## Functions asNetwork and asIgraph

Conversion of network objects between classes “network” and “igraph” can be performed using functions asNetwork and asIgraph.
### network => igraph

Converting “network” objects to “igraph” is done by calling function asIgraph on a “network” object:

# check class of 'exNetwork'
class(exNetwork)
## [1] "network"
# convert to 'igraph'
g <- asIgraph(exNetwork)
# check class of the result
class(g)
## [1] "igraph"

Check if the edge lists of the objects are identical:

el.g <- get.edgelist(g)
el.n <- as.matrix(exNetwork, "edgelist")
identical(as.numeric(el.g), as.numeric(el.n))
## [1] TRUE

### igraph => network

Converting “igraph” objects to “network” is done by calling function asNetwork on an “igraph” object:

net <- asNetwork(exIgraph)
## Warning: network attribute 'layout' is a function, print the result might
## give errors

Note the warning because of a “non-standard” network attribute layout, which is a function. Printing “network” objects does not handle non-standard attributes very well. However, all the data and attributes are copied correctly.

Check if the edge lists of the objects are identical:

el.g2 <- get.edgelist(exIgraph)
el.n2 <- as.matrix(net, "edgelist")
identical(as.numeric(el.g2), as.numeric(el.n2))
## [1] TRUE

### Handling attributes

Objects of class “igraph” and “network”, apart from storing the actual network data (vertexes and edges), allow for adding attributes of vertexes, edges, and attributes of the network as a whole (called “network attributes” or “graph attributes” in the nomenclatures of packages “network” and “igraph” respectively). Vertex and edge attributes are used by “igraph” and “network” in a largely similar fashion. However, network-level attributes are used differently. Objects of class “network” use network-level attributes to store various metadata, e.g., network size, whether the network is directed, is bipartite, etc. In “igraph” this information is stored separately. The above difference affects the way the attributes are copied when we convert “network” and “igraph” objects into one another.
Both functions asNetwork and asIgraph have an additional argument attrmap that is used to specify how vertex, edge, and network attributes are copied. The attrmap argument requires a data frame. Rows of that data frame specify rules for copying/renaming different attributes. The data frame should have the following columns (all of class “character”):

• type: one of “network”, “vertex” or “edge”; whether the rule applies to a network, vertex or edge attribute.
• fromcls: name of the class of the object we are converting from
• fromattr: name of the attribute in the object we are converting from
• tocls: name of the class of the object we are converting to
• toattr: name of the attribute in the object we are converting to

The default rules are returned by the function attrmap():

attrmap()
##      type fromcls  fromattr   tocls       toattr
## 1 network network  directed  igraph         <NA>
## 2 network network bipartite  igraph         <NA>
## 3 network network     loops  igraph         <NA>
## 4 network network     mnext  igraph         <NA>
## 5 network network  multiple  igraph         <NA>
## 6 network network         n  igraph         <NA>
## 7 network network     hyper  igraph         <NA>
## 8  vertex  igraph      name network vertex.names

For example, the last row specifies a rule that when an object of class “igraph” is converted to class “network”, a vertex attribute name in the “igraph” object will be copied to a vertex attribute called vertex.names in the resulting object of class “network”. If the column toattr contains an NA, the corresponding attribute is not copied. For example, the first row specifies a rule that when an object of class “network” is converted to class “igraph”, the network attribute directed in the “network” object is not copied to the resulting object of class “igraph”. Users can customize the rules, or add new ones, by constructing similar data frames and supplying them through the argument attrmap to functions asIgraph and asNetwork.
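For readers outside R, the rule table above boils down to a simple lookup: given an attribute's type, source class, name, and target class, either rename it, drop it, or copy it unchanged. A hypothetical Python mirror of that logic (only two of the real rules are included, for illustration):

```python
# A hypothetical mirror of intergraph's attrmap() rules table; the real table
# has more rows, these two are just the first and last for illustration.
RULES = [
    # (type, fromcls, fromattr, tocls, toattr); toattr=None means "do not copy"
    ("network", "network", "directed", "igraph", None),
    ("vertex", "igraph", "name", "network", "vertex.names"),
]

def map_attr(attr_type, from_cls, attr, to_cls):
    """Target attribute name, None if dropped, or the same name if no rule matches."""
    for t, fc, fa, tc, ta in RULES:
        if (t, fc, fa, tc) == (attr_type, from_cls, attr, to_cls):
            return ta
    return attr  # default: copy under the unchanged name

print(map_attr("vertex", "igraph", "name", "network"))       # vertex.names
print(map_attr("network", "network", "directed", "igraph"))  # None
print(map_attr("edge", "igraph", "label", "network"))        # label
```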
## Network objects to/from data frames

Function asDF can be used to convert a network object (of class “igraph” or “network”) to a list of two data frames:

l <- asDF(exIgraph)
str(l)
## List of 2
##  $ edges   :'data.frame': 11 obs. of 3 variables:
##   ..$ V1   : num [1:11] 2 3 4 5 6 8 10 11 12 13 ...
##   ..$ V2   : num [1:11] 1 1 1 1 7 9 11 12 13 14 ...
##   ..$ label: chr [1:11] "ba" "ca" "da" "ea" ...
##  $ vertexes:'data.frame': 15 obs. of 2 variables:
##   ..$ intergraph_id: int [1:15] 1 2 3 4 5 6 7 8 9 10 ...
##   ..$ label        : chr [1:15] "a" "b" "c" "d" ...

The resulting list has two components, edges and vertexes. The edges component is essentially an edge list containing ego and alter ids in the first two columns. The remaining columns store edge attributes (if any). For our example data it is:

l$edges
##    V1 V2 label
## 1   2  1    ba
## 2   3  1    ca
## 3   4  1    da
## 4   5  1    ea
## 5   6  7    fg
## 6   8  9    hi
## 7  10 11    jk
## 8  11 12    kl
## 9  12 13    lm
## 10 13 14    mn
## 11 14 12    nl

The vertexes component contains data on vertexes, with the vertex id (the same that is used in the first two columns of edges) stored in the first column. The remaining columns store vertex attributes (if any). For our example data it is:

l$vertexes
##    intergraph_id label
## 1              1     a
## 2              2     b
## 3              3     c
## 4              4     d
## 5              5     e
## 6              6     f
## 7              7     g
## 8              8     h
## 9              9     i
## 10            10     j
## 11            11     k
## 12            12     l
## 13            13     m
## 14            14     n
## 15            15     o

Functions asNetwork and asIgraph can also be used to create network objects from data frames such as those above. The first argument should be an edge list data frame. The optional argument vertices expects a data frame with vertex data (just like l$vertexes). Additionally, we need to specify whether the edges should be interpreted as directed or not through the argument directed.
For example, to create an object of class “network” from the data frames created above from object exIgraph we can:

z <- asNetwork(l$edges, directed = TRUE, l$vertexes)
z
## Network attributes:
##   vertices = 15
##   directed = TRUE
##   hyper = FALSE
##   loops = FALSE
##   multiple = FALSE
##   bipartite = FALSE
##   total edges= 11
##     missing edges= 0
##     non-missing edges= 11
##
## Vertex attribute names:
##   label vertex.names

This is basically what happens when we call asNetwork(exIgraph).
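The edges/vertexes decomposition that asDF produces can be mimicked in a few lines of plain Python; this is only an illustration of the data layout (toy vertex labels, not the real exIgraph), not part of intergraph:

```python
# A hypothetical plain-Python rendering of the asDF() decomposition: a network
# becomes a vertex table (with a 1-based intergraph_id) plus an edge table
# whose V1/V2 columns refer to those ids.
edges = [("b", "a"), ("c", "a"), ("f", "g")]
vertices = ["a", "b", "c", "f", "g"]

ids = {label: i + 1 for i, label in enumerate(vertices)}  # 1-based, as in R

vertex_table = [{"intergraph_id": ids[v], "label": v} for v in vertices]
edge_table = [
    {"V1": ids[u], "V2": ids[v], "label": u + v}  # edge label = pasted endpoints
    for u, v in edges
]

print(edge_table[0])    # {'V1': 2, 'V2': 1, 'label': 'ba'}
print(vertex_table[0])  # {'intergraph_id': 1, 'label': 'a'}
```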
https://physics.stackexchange.com/questions/452273/derivation-of-the-velocity-for-the-expectation-value-of-position-in-quantum-mech?answertab=oldest
# Derivation of the velocity for the expectation value of position in quantum mechanics

I am currently reading Griffiths' book on quantum mechanics and I don't understand the derivation of the time derivative of the expectation value of the position. The part I am stuck on is that, after an integration by parts, I have to calculate $$x(\psi^*\dfrac{d\psi}{dx} - \psi\dfrac{d\psi^*}{dx})\big|^{\infty}_{-\infty}.$$ I know that for the wave function to be normalized, it needs to go to zero as $x$ approaches infinity. With that in mind, I calculate $$\infty \cdot 0-(-\infty \cdot 0)$$, which is supposed to equal $$0$$. What am I doing wrong?

• maybe consider if $x\psi(x)$ or $x\psi'(x)$ need to go to $0$? – ZeroTheHero Jan 5 at 16:15
• @ZeroTheHero The only thing I can think of is that the book says that $\psi$ goes to zero faster than $\dfrac{1}{\sqrt{x}}$. Does this mean that $\psi$ goes to zero faster than $x$ goes to infinity, so that $x\psi$ goes to zero? – alexk745 Jan 5 at 16:49
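For any concrete normalizable packet one can watch the boundary term die off numerically, because $\psi$ decays faster than any power of $x$ and so beats the factor of $x$ in front. A small sketch (the Gaussian packet and the value k = 3 are illustrative choices, not Griffiths' general $\psi$):

```python
import cmath

k = 3.0  # illustrative wavenumber for the sample packet

def psi(x):
    # Gaussian envelope times a plane-wave phase: normalizable, and it decays
    # faster than any power of x -- which is why x * (boundary term) -> 0.
    return cmath.exp(-x**2 + 1j * k * x)

def dpsi(x, h=1e-6):
    # central finite difference, accurate enough for this illustration
    return (psi(x + h) - psi(x - h)) / (2 * h)

def boundary_term(x):
    p = psi(x)
    return x * (p.conjugate() * dpsi(x) - p * dpsi(x).conjugate())

for x in (1.0, 3.0, 6.0):
    print(x, abs(boundary_term(x)))  # shrinks extremely fast as x grows
```

For this packet the term equals $2ikx\,e^{-2x^2}$ exactly, so by $x = 6$ it is already below $10^{-29}$.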
http://math.stackexchange.com/questions/42533/decreasing-sequence-of-sets
# Decreasing sequence of sets

$X$ is a topological space. Let $A_n$ be a non-increasing sequence of subsets of this space: $$A_n\supseteq A_{n+1}$$ and all $A_n$ are compact sets. Is it true that $A_\infty = \bigcap_n A_n$ is empty if and only if $A_N$ is empty for some $N$? If yes, how to prove it? Moreover, is $A_\infty$ compact?

- In the last part, you say "Moreover, if $A_\infty$ is compact?" do you mean "Moreover, is $A_\infty$ compact?" or the same question under the assumption that the intersection is compact? – Asaf Karagila Jun 1 '11 at 10:13
- @Asaf: I don't understand your question. If the intersection is empty then it certainly is compact and my answer applies. If the intersection is non-empty then certainly no $A_n$ is empty. The only way I can make sense of this part of the question is: "Is the intersection of countably many nested compact sets compact"? – t.b. Jun 1 '11 at 10:24
- @Asaf, I thought that it was right usage of English. I was interested in whether $A_\infty$ is compact or not. – Ilya Jun 1 '11 at 11:08
- @Theo, Gortaur: I merely wanted to be sure it was not a typo. There are plenty of non-English speakers who might make such a mistake. :-) – Asaf Karagila Jun 1 '11 at 11:49

You need to assume that $A_1$ is compact and that the sets $A_{n}$ are closed (which is of course automatic under your assumption if $X$ is Hausdorff). A silly counter-example when the $A_n$ aren't closed: take $A_n = [n,\infty)$ in $X = \mathbb{R}$ with the trivial topology.

If the $A_n$ are closed sets, note that $\bigcap A_n = \emptyset$ implies that $U_n = A_1 \smallsetminus A_n$ is an open cover of $A_1$, by passing to complements. Applying compactness of $A_1$, we see that finitely many of the $U_n$ already cover $A_1$. Passing to complements again and using that the sets are nested, $A_n \supseteq A_{n+1}$, we see that $A_N$ must be empty for $N$ large enough.
Of course, if we are assuming each $A_n$ closed and $A_1$ compact, then $A_{\infty}$ is compact since closed subsets of a compact set are compact.

@Theo, could you please tell me: if $X$ is a compact Borel space and $f:X\to [0,1]$ is continuous on $X$, does it mean that $\{x\in X : f(x) = 1\}$ is compact in $X$? – Ilya Jun 1 '11 at 11:13

@Gortaur: Yes, as $\{1\} \subset [0,1]$ is closed, so is $f^{-1}(\{1\}) = \{f = 1\}$ by continuity. Closed subsets $F$ of compact spaces $X$ are compact (if $\{U_{i} \cap F\}$ is an open cover of $F$ with $U_i \subset X$ open, then $\{X \smallsetminus F\} \cup \{U_i\}$ is an open cover of $X$). – t.b. Jun 1 '11 at 11:21
https://wiki.sagemath.org/factorization_of_integers_of_special_forms?action=diff&rev1=15&rev2=16
Differences between revisions 15 and 16

← Revision 15 as of 2007-01-23 03:15:50 (Size: 770, Editor: TimothyClemans)
→ Revision 16 as of 2007-01-23 03:24:38 (Size: 767, Editor: TimothyClemans)

Line 5 changed:
- * $p^p \pm 1$ where $p$ is a prime number and $p < 180$. [http://homes.cerias.purdue.edu/~ssw/bell/r1]
+ * $p^p \pm 1$ where $p$ is a prime number and $p < 180$. [http://homes.cerias.purdue.edu/~ssw/bell]

1. $a^n \pm 1$ for a = 2, 3, 5, 6, 7, 10, 11, 12 and large exponents n [http://homes.cerias.purdue.edu/~ssw/cun/index.html]
2. $a^n \pm 1$ for a ≤ 13 and a not a perfect power [http://wwwmaths.anu.edu.au/~brent/factors.html]
3. $2^n \pm 1$ for 1200 < n < 10000 [http://www.euronet.nl/users/bota/medium-p.htm]
4. $10^n \pm 1$ for n ≤ 100 [http://www.swox.com/gmp/repunit.html]
5. $p^p \pm 1$ where p is a prime number and p < 180 [http://homes.cerias.purdue.edu/~ssw/bell]
6. $2^{2^n} + 1$ (Fermat numbers) [http://www.prothsearch.net/fermat.html]
7. $2^{3^n} \pm 1$ [http://www.alpertron.com.ar/MODFERM.HTM]
8. Fibonacci numbers ($F_n$) and Lucas numbers ($L_n$) for n < 10000 [http://home.att.net/~blair.kelly/mathematics/fibonacci/]

factorization_of_integers_of_special_forms (last edited 2008-11-14 13:41:51 by localhost)
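The record factorizations these tables track need specialized algorithms, but the shapes themselves are easy to play with. A toy trial-division sketch for tiny instances of the forms above:

```python
def factor(n):
    # Naive trial division -- fine for the tiny illustrative cases below,
    # nothing like the large computations the tables above collect.
    fs, d = [], 2
    while d * d <= n:
        while n % d == 0:
            fs.append(d)
            n //= d
        d += 1
    if n > 1:
        fs.append(n)
    return fs

print(factor(2**11 - 1))  # [23, 89]          (a Mersenne-form 2^n - 1)
print(factor(5**5 + 1))   # [2, 3, 521]       (a p^p + 1 case)
print(factor(10**5 - 1))  # [3, 3, 41, 271]   (the repunit 99999)
```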
https://terrytao.wordpress.com/2012/12/11/mixing-for-progressions-in-non-abelian-groups/
I’ve just uploaded to the arXiv my paper “Mixing for progressions in non-abelian groups“, submitted to Forum of Mathematics, Sigma (which, along with sister publication Forum of Mathematics, Pi, has just opened up its online submission system). This paper is loosely related in subject matter to my two previous papers on polynomial expansion and on recurrence in quasirandom groups (with Vitaly Bergelson), although the methods here are rather different from those in those two papers. The starting motivation for this paper was a question posed in this foundational paper of Tim Gowers on quasirandom groups. In that paper, Gowers showed (among other things) that if ${G}$ was a quasirandom group, patterns such as ${(x,xg,xh, xgh)}$ were mixing in the sense that, for any four sets ${A,B,C,D \subset G}$, the number of such quadruples ${(x,xg,xh, xgh)}$ in ${A \times B \times C \times D}$ was equal to ${(\mu(A) \mu(B) \mu(C) \mu(D) + o(1)) |G|^3}$, where ${\mu(A) := |A| / |G|}$, and ${o(1)}$ denotes a quantity that goes to zero as the quasirandomness of the group goes to infinity. In my recent paper with Vitaly, we also considered mixing properties of some other patterns, namely ${(x,xg,gx)}$ and ${(g,x,xg,gx)}$. This paper is concerned instead with the pattern ${(x,xg,xg^2)}$, that is to say a geometric progression of length three. As observed by Gowers, by applying (a suitably quantitative version of) Roth’s theorem in (cosets of) a cyclic group, one can obtain a recurrence theorem for this pattern without much effort: if ${G}$ is an arbitrary finite group, and ${A}$ is a subset of ${G}$ with ${\mu(A) \geq \delta}$, then there are at least ${c(\delta) |G|^2}$ pairs ${(x,g) \in G^2}$ such that ${x, xg, xg^2 \in A}$, where ${c(\delta)>0}$ is a quantity depending only on ${\delta}$.
However, this argument does not settle the question of whether there is a stronger mixing property, in that the number of pairs ${(x,g) \in G^2}$ such that ${(x,xg,xg^2) \in A \times B \times C}$ should be ${(\mu(A)\mu(B)\mu(C)+o(1)) |G|^2}$ for any ${A,B,C \subset G}$. Informally, this would assert that for ${x, g}$ chosen uniformly at random from ${G}$, the triplet ${(x, xg, xg^2)}$ should resemble a uniformly selected element of ${G^3}$ in some weak sense. For non-quasirandom groups, such mixing properties can certainly fail. For instance, if ${G}$ is the cyclic group ${G = ({\bf Z}/N{\bf Z},+)}$ (which is abelian and thus highly non-quasirandom) with the additive group operation, and ${A = \{1,\ldots,\lfloor \delta N\rfloor\}}$ for some small but fixed ${\delta > 0}$, then ${\mu(A) = \delta + o(1)}$ in the limit ${N \rightarrow \infty}$, but the number of pairs ${(x,g) \in G^2}$ with ${x, x+g, x+2g \in A}$ is ${(\delta^2/2 + o(1)) |G|^2}$ rather than ${(\delta^3+o(1)) |G|^2}$. The problem here is that the identity ${(x+2g) = 2(x+g) - x}$ ensures that if ${x}$ and ${x+g}$ both lie in ${A}$, then ${x+2g}$ has a highly elevated likelihood of also falling in ${A}$. One can view ${A}$ as the preimage of a small ball under the one-dimensional representation ${\rho: G \rightarrow U(1)}$ defined by ${\rho(n) := e^{2\pi i n/N}}$; similar obstructions to mixing can also be constructed from other low-dimensional representations. However, by definition, quasirandom groups do not have low-dimensional representations, and Gowers asked whether mixing for ${(x,xg,xg^2)}$ could hold for quasirandom groups. I do not know if this is the case for arbitrary quasirandom groups, but I was able to settle the question for a specific class of quasirandom groups, namely the special linear groups ${G := SL_d(F)}$ over a finite field ${F}$ in the regime where the dimension ${d}$ is bounded (but is at least two) and ${F}$ is large. 
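The failure of mixing in the cyclic-group example above is easy to verify by brute force. The following sketch (N = 1000 and δ = 0.1 are arbitrary illustrative values) counts the pairs directly and compares the result against both predictions:

```python
# Brute-force check of the abelian counterexample in Z/NZ with A = {1,...,m}.
N = 1000
delta = 0.1
m = int(delta * N)
A = set(range(1, m + 1))

# Pairs (x, g) with x, x+g, x+2g all in A: writing a = x and b = x+g (both in A),
# the third point is x+2g = 2b - a, so it suffices to range over (a, b) in A^2.
count = sum(1 for a in A for b in A if (2 * b - a) % N in A)

print(count)                # actual number of pairs
print(delta**2 / 2 * N**2)  # the (delta^2/2 + o(1))|G|^2 behaviour described above
print(delta**3 * N**2)      # what full mixing would have predicted instead
```

For these parameters the count is exactly δ²N²/2 = 5000, far above the mixing prediction δ³N² = 1000, matching the constraint-based explanation above.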
Indeed, for such groups I can obtain a count of ${(\mu(A) \mu(B) \mu(C) + O( |F|^{-\min(d-1,2)/8} )) |G|^2}$ for the number of pairs ${(x,g) \in G^2}$ with ${(x, xg, xg^2) \in A \times B \times C}$. In fact, I have the somewhat stronger statement that there are ${(\mu(A) \mu(B) \mu(C) \mu(D) + O( |F|^{-\min(d-1,2)/8} )) |G|^2}$ pairs ${(x,g) \in G^2}$ with ${(x,xg,xg^2,g) \in A \times B \times C \times D}$ for any ${A,B,C,D \subset G}$. I was also able to obtain a partial result for the length four progression ${(x,xg,xg^2, xg^3)}$ in the simpler two-dimensional case ${G = SL_2(F)}$, but I had to make the unusual restriction that the group element ${g \in G}$ was hyperbolic in the sense that it was diagonalisable over the finite field ${F}$ (as opposed to merely diagonalisable over the algebraic closure ${\overline{F}}$ of that field); this amounts to the discriminant of the matrix being a quadratic residue, and this holds for approximately half of the elements of ${G}$. The result is then that for any ${A,B,C,D \subset G}$, one has ${(\frac{1}{2} \mu(A) \mu(B) \mu(C) \mu(D) + o(1)) |G|^2}$ pairs ${(x,g) \in G^2}$ with ${g}$ hyperbolic and ${(x,xg,xg^2,xg^3) \in A \times B \times C \times D}$. (Again, I actually show a slightly stronger statement in which ${g}$ is restricted to an arbitrary subset ${E}$ of hyperbolic elements.) For the length three argument, the main tools used are the Cauchy-Schwarz inequality, the quasirandomness of ${G}$, and some algebraic geometry to ensure that a certain family of probability measures on ${G}$ that are defined algebraically are approximately uniformly distributed. The length four argument is significantly more difficult and relies on a rather ad hoc argument involving, among other things, expander properties related to the work of Bourgain and Gamburd, and also a “twisted” version of an argument of Gowers that is used (among other things) to establish an inverse theorem for the ${U^3}$ norm.
I give some details of these arguments below the fold. — 1. Length three progressions — One can view the mixing property of length three progressions as an assertion about the unbiased nature of sums of the form $\displaystyle \sum_{x,g \in G} f_0(x) f_1(xg) f_2(xg^2) \ \ \ \ \ (1)$ for various bounded functions ${f_0,f_1,f_2: G \rightarrow {\bf C}}$. (To obtain the stronger statement in which ${g}$ is also restricted to some set ${D}$, one would throw in an additional function ${f_3(g)}$, but let us ignore that generalisation here for the sake of simplicity.) Roughly speaking, mixing means that the sum (1) should be small if at least one of ${f_0,f_1,f_2}$ has small mean. One way in which mixing could fail would be if there were an unexpected constraint between ${x, xg, xg^2}$, for instance a constraint of the form $\displaystyle \phi_0(x) + \phi_1(xg) + \phi_2(xg^2) = 0 \ \ \ \ \ (2)$ for all ${x, g\in G}$ and some non-trivial functions ${\phi_0,\phi_1,\phi_2:G \rightarrow {\bf R}/{\bf Z}}$ (not necessarily homomorphisms). Then one could make the sum (1) for ${f_j(x) := e^{2\pi i \phi_j(x)}}$ exhibit no cancellation whatsoever, even though one would expect ${f_0,f_1,f_2}$ to have small mean if the ${\phi_j}$ were sufficiently non-trivial. (This observation is basically what underlies the failure of mixing in the abelian case.) Thus, this suggests the toy problem of ruling out constraints of the form (2) when ${G}$ is a special linear group ${G = SL_d(F)}$. This toy problem (which can be viewed as ruling out the “${100\%}$ structured” version of the mixing problem, whereas the full mixing result requires excluding a more general “${1\%}$ structured” situation) is significantly weaker than the general result, but it turns out that the proof strategy for the toy problem can be adapted to the general case (basically by replacing many of the algebraic manipulations below with a suitable analogue involving the Cauchy-Schwarz inequality). Let’s see how this works.
Suppose for contradiction that we had a constraint of the form (2). In the abelian case, standard “double differencing” arguments let one conclude that ${\phi_0,\phi_1,\phi_2}$ are affine homomorphisms; see e.g. Section 2 of these lecture notes. It turns out that essentially the same argument can be applied in the nonabelian case, but one acquires a nonabelian “twist” which can be exploited to give additional mixing. Shifting ${x}$ by ${g}$, we conclude that $\displaystyle \phi_0(xg^{-1}) + \phi_1(x) + \phi_2(xg) = 0$ for all ${x,g \in G}$. Now we use some algebraic manipulation to eliminate ${\phi_1}$. If we replace ${g}$ by ${ga}$ for some ${a \in G}$, we also have $\displaystyle \phi_0(xa^{-1} g^{-1}) + \phi_1(x) + \phi_2(xga) = 0;$ subtracting, we conclude that $\displaystyle \partial_{ga^{-1}g^{-1}} \phi_0(xg^{-1}) + \partial_a \phi_2(xg) = 0$ where ${\partial_a \phi(x) := \phi(xa) - \phi(x)}$ is the “derivative” of ${\phi}$ in the ${a}$ direction. Setting ${y := xg}$, we conclude that $\displaystyle \partial_{ga^{-1}g^{-1}} \phi_0(yg^{-2}) + \partial_a \phi_2(y) = 0$ for all ${y,g,a \in G}$. We can now perform a similar manipulation to eliminate ${\phi_2}$. Replacing ${g}$ by ${hg}$ for some ${h \in G}$, we have $\displaystyle \partial_{hga^{-1}g^{-1}h^{-1}} \phi_0(yg^{-1}h^{-1}g^{-1}h^{-1}) + \partial_a \phi_2(y) = 0.$ Subtracting, we conclude that $\displaystyle \partial_{ga^{-1}g^{-1}} \phi_0(yg^{-2}) = \partial_{hga^{-1}g^{-1}h^{-1}} \phi_0(yg^{-1}h^{-1}g^{-1}h^{-1})$ for all ${y,g,a,h \in G}$. We can clean this up a bit by setting ${b := ga^{-1} g}$ and ${z := yg^{-2}}$, leading to $\displaystyle \partial_{b} \phi_0(z) = \partial_{hbh^{-1}} \phi_0(zgh^{-1}g^{-1}h^{-1})$ for all ${z,g,b,h \in G}$. Next, we exploit the fact that the quantity ${hbh^{-1}}$ appearing on the right-hand side does not change if one replaces ${h}$ by ${hc}$ for any ${c}$ in the centraliser ${Z(b) := \{ c\in G: cb=bc\}}$ of ${b}$. 
If we then replace ${h}$ by ${hc}$ in the above equation, we conclude that $\displaystyle \partial_{b} \phi_0(z) = \partial_{hbh^{-1}} \phi_0(zgc^{-1}h^{-1}g^{-1}h^{-1}c^{-1})$ for all ${z,g,b,h \in G}$ and ${c \in Z(b)}$. Let us now fix ${h,b}$, and let ${A_{h,b} \subset G}$ denote the set $\displaystyle A_{h,b} := \{ gc^{-1}h^{-1}g^{-1}h^{-1} c^{-1}: g \in G; c \in Z(b) \}.$ The above identity then tells us that for ${z \in G}$ and ${a \in A_{h,b}}$, the quantity ${\partial_{hbh^{-1}} \phi_0( z a )}$ is in fact independent of ${a}$. So if one can show that ${A_{h,b}}$ is “large” (e.g. has positive density in ${G}$), then this suggests that the function ${\partial_{hbh^{-1}} \phi_0}$ has to be basically constant (and with quasirandomness, one can make this statement precise). Further application of quasirandomness then lets one conclude that ${\phi_0}$ is itself constant, at which point it is not difficult to ensure that ${\phi_1}$ and ${\phi_2}$ are constant as well, rendering the entire constraint (2) trivial. In the ${d=2}$ case, one can establish this by explicit (but ad hoc) computations (taking advantage of the special role of the trace in the ${d=2}$ case, for instance it is the case that two (non-central) matrices in ${SL_2}$ are conjugate iff they have the same trace, and there is also the nice fact that a matrix in ${SL_2}$ and its inverse have the same trace). For general ${d}$, this largeness of ${A_{h,b}}$ can be established by algebraic geometry methods; the key is to show that the map ${(g,c) \mapsto gc^{-1}h^{-1}g^{-1}h^{-1}c^{-1}}$ from ${G \times Z(b)}$ to ${G}$ is dominant in the sense that its image is Zariski-dense in ${G}$. In the case of ${SL_d(F)}$, this can be accomplished by an inspection of the derivative of this map at the identity. (I expect that similar things can be done in other almost simple algebraic groups, but did not attempt to do so in this paper.) — 2. 
Length four progressions — It is remarkably difficult to extend the Cauchy-Schwarz based length three arguments to length four or higher in the nonabelian setting. In the abelian case, every application of the Cauchy-Schwarz inequality reduces a certain “complexity” of the average being studied; in terms of raw length, the average may look much more fearsome after Cauchy-Schwarz, but after making some changes of variable and collecting terms, one can arrive at an average that is actually simpler in certain key respects than the original average. But it turns out that in the nonabelian setting, the process of making changes of variable and collecting terms introduces additional complexity into the average that counteracts the abelian phenomenon of complexity reduction. This was already apparent in the length three setting, when one started to see messy looking expressions such as ${zgc^{-1}h^{-1}g^{-1}h^{-1}c^{-1}}$ emerge, but the argument was short enough that one could conclude before these expressions spiraled out of control. In the case of length four progressions, the nonabelian complications seem to outrun the simplifying process, and I was not able to end up with a tractable average after a finite number of applications of the Cauchy-Schwarz inequality. Instead, we leverage the abelian additive combinatorics theory by working primarily with a metabelian subgroup of ${SL_2(F)}$, namely the Borel subgroup ${B}$ of upper-triangular elements of ${SL_2(F)}$. Note that every hyperbolic element of ${SL_2(F)}$ can be conjugated into ${B}$, which explains our restriction to the hyperbolic elements. By using the conjugates of ${B}$ to trace out all the hyperbolic elements of ${SL_2(F)}$ more or less evenly, matters soon reduce to establishing a “relative mixing” property for the pattern ${(x,xg,xg^2,xg^3)}$ on ${B}$. 
To explain this relative mixing, first observe that one does not have complete mixing for this pattern in ${B}$, due to the presence of an abelian quotient ${F^\times}$ of ${B}$, formed by mapping ${\begin{pmatrix} t & a \\ 0 & t^{-1} \end{pmatrix}}$ to ${t}$, and one can then pull back the failure of mixing on ${F^\times}$ (e.g. by counting length four progressions inside a single fixed geometric progression) to demonstrate failure of mixing on ${B}$. However, one can hope to show that this is the only obstruction to mixing, in the sense that we can get sums such as $\displaystyle \sum_{x,g \in B} f_0(x) f_1(xg) f_2(xg^2) f_3(xg^3) \ \ \ \ \ (3)$ to be small if at least one of ${f_0,f_1,f_2,f_3: B \rightarrow {\bf C}}$ pushes down to zero on ${F^\times}$, or equivalently if it has mean zero on every coset of the kernel of this quotient, which is the group ${U}$ of unipotent matrices in ${B}$. In order to upgrade relative mixing on ${B}$ and its conjugates back to full mixing on ${G}$, we need a certain expansion property of a given conjugacy class ${C(a)}$ of a non-central element ${a \in G}$. This property asserts that if ${f \in \ell^2(G)}$ has mean zero, then after convolving ${f}$ with the uniform probability measure on such a conjugacy class, the ${\ell^2}$ norm drops by a positive power of ${|F|}$. This type of expansion is related to the work of Bourgain and Gamburd (in which the conjugacy class is replaced by a set of bounded cardinality, and the drop in ${\ell^2}$ norm is proportionally smaller as a result), and uses some of the same tools in the proof (in particular the “escape from subvarieties” phenomenon of Eskin, Mozes, and Oh). (On the other hand, the combinatorial product theory of Helfgott, which plays a central role in the work of Bourgain and Gamburd, is not needed here, because in this setting one only needs to understand products of algebraic sets, such as conjugacy classes, rather than arbitrary subsets.)
By foliating ${B}$ into cosets of ${U}$ (which is isomorphic to ${F}$), one can after some straightforward calculations rewrite the sum into a sum which is basically of the form $\displaystyle \sum_{s,t \in F^\times} \sum_{a,b \in F} f_{0,s}(a) f_{1,st}(a+b) f_{2,st^2}(a+(1+t^2)b) f_{3,st^3}(a+(1+t^2+t^4)b)$ for some family ${f_{i,s}: F \rightarrow {\bf C}}$ of bounded functions for ${i=0,1,2,3}$ and ${s \in F^\times}$. The inner sum resembles a count of four term progressions, a statistic which has been studied by higher order Fourier-analytic methods since the work of Gowers on Szemerédi’s theorem for length four progressions. In principle one could analyse these expressions using the inverse ${U^3}$ theorem of Ben Green and myself, but this would require a large amount of manipulation of two-step nilsequences, which would lead to a number of technical complications. Instead, we take a “softer” approach, in which we set up some of the quadratic Fourier analysis of Gowers that goes into the proof of the inverse ${U^3}$ theorem, but stop well before the nilsequences come in. More precisely, we use a variant of the basic fact in quadratic Fourier analysis (already present in the previously mentioned paper of Gowers) that if a function ${f}$ has large ${U^3}$ norm, then for many shifts ${h}$, the derivative ${\Delta_h f(x) := f(x+h) \overline{f(x)}}$ correlates with a linear phase ${e(\xi(h) x)}$, and furthermore that this phase ${\xi}$ is approximately linear in the sense that there are many quadruples ${(h_1,h_2,h_3,h_4)}$ with ${h_1+h_2=h_3+h_4}$ and ${\xi(h_1)+\xi(h_2) = \xi(h_3)+\xi(h_4)}$. Applying this analysis to the above sum, we see that if that sum is large, then one obtains a number of approximate linearity relationships between the frequencies ${\xi}$ for which ${\Delta_h f_{i,s}(x)}$ correlates with ${e(\xi x)}$. 
On the other hand, for each fixed ${h,i,s}$, Plancherel’s theorem tells us that there can only be a bounded number of frequencies ${\xi}$ for which the correlation between ${\Delta_h f_{i,s}(x)}$ and ${e(\xi x)}$ is large. Varying ${s,t}$ suitably, this eventually creates so many linear constraints between these frequencies (with coefficients that vary in a sufficiently nonlinear fashion to ensure high rank) that a contradiction can be derived, unless all the frequencies involved vanish. But this case can be handled by a variant of the above arguments, though one needs to vary ${s,t}$ inside a moderately large two-dimensional arithmetic progression before one can finally reduce to a contradiction, which requires invoking the multidimensional Szemerédi theorem in order to ensure that all the pairs ${(s,t)}$ used are “good” in a certain technical sense. It is this last step which makes the error terms in the length four progression results qualitative (of order ${o(1)}$) rather than quantitative (of order ${O(|F|^{-c})}$). I feel that there should be a better approach than the rather ad hoc one employed here, which should lead to better bounds (and extend more easily to groups other than ${SL_2(F)}$).
https://cran.microsoft.com/snapshot/2021-03-04/web/packages/epiR/vignettes/epiR_sample_size.html
# Sample Size Calculations Using epiR

### Prevalence estimation

The expected seroprevalence of brucellosis in a population of cattle is thought to be in the order of 15%. How many cattle need to be sampled and tested to be 95% certain that our seroprevalence estimate is within 20% (i.e. 0.20 $$\times$$ 0.15 = 0.03, 3%) of the true population value, assuming use of a test with perfect sensitivity and specificity? This formula requires the population size to be specified so we set N to a large number, 1,000,000:

library(epiR)
epi.sssimpleestb(N = 1E+06, Py = 0.15, epsilon.r = 0.20, se = 1, sp = 1, nfractional = FALSE, conf.level = 0.95)
#> [1] 545

A total of 545 cows are required to meet the specifications of the study.

### Prospective cohort study

A prospective cohort study of dry food diets and feline lower urinary tract disease (FLUTD) in mature male cats is planned. A sample of cats will be selected at random from the population and owners who agree to participate in the study will be asked to complete a questionnaire at the time of enrolment. Cats enrolled into the study will be followed for at least 5 years to identify incident cases of FLUTD. The investigators would like to be 0.80 certain of being able to detect when the risk ratio of FLUTD is 1.4 for cats habitually fed a dry food diet, using a 0.05 significance test. Previous evidence suggests that the incidence risk of FLUTD in cats not on a dry food (i.e. ‘other’) diet is around 50 per 1000 per year, so the incidence risk in exposed (dry food) cats is 70 per 1000 per year. Assuming equal numbers of cats on dry food and other diets are sampled, how many cats should be sampled overall?

epi.sscohortt(irexp1 = 70/1000, irexp0 = 50/1000, FT = 5, n = NA, power = 0.80, r = 1, design = 1, sided.test = 2, nfractional = FALSE, conf.level = 0.95)$n.total
#> [1] 2080

A total of 2080 subjects are required (1040 exposed and 1040 unexposed).
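As a cross-check on the prevalence example above, the simple-random-sample size can be reproduced from the standard formula n = z² p(1−p)/ε², where ε is the absolute precision (here 0.20 × 0.15 = 0.03). A Python sketch using only the standard library (the function and variable names are ours, not part of epiR):

```python
from math import ceil
from statistics import NormalDist

def n_simple(p, epsilon_r, conf=0.95):
    """Sample size to estimate a prevalence p to within relative precision
    epsilon_r, assuming a perfect test and a very large population."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)   # 1.96 for conf = 0.95
    eps = epsilon_r * p                            # absolute precision
    return ceil(z**2 * p * (1 - p) / eps**2)

print(n_simple(0.15, 0.20))  # 545, matching epi.sssimpleestb
```

With N = 1,000,000 the finite population correction used by epi.sssimpleestb is negligible, which is why the infinite-population formula reproduces the same answer.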
### Case-control study

A case-control study of the relationship between white pigmentation around the eyes and ocular squamous cell carcinoma in Hereford cattle is planned. A sample of cattle with newly diagnosed squamous cell carcinoma will be compared for white pigmentation around the eyes with a sample of controls. Assuming an equal number of cases and controls, how many study subjects are required to detect an odds ratio of 2.0 with 0.80 power using a two-sided 0.05 test? Previous surveys have shown that around 0.30 of Hereford cattle without squamous cell carcinoma have white pigmentation around the eyes.

epi.sscc(OR = 2.0, p0 = 0.30, n = NA, power = 0.80, r = 1, rho = 0, design = 1, sided.test = 2, conf.level = 0.95, method = "unmatched", nfractional = FALSE, fleiss = FALSE)$n.total
#> [1] 282

If the true odds for squamous cell carcinoma in exposed subjects relative to unexposed subjects is 2.0, we will need to enrol 141 cases and 141 controls (282 cattle in total) to reject the null hypothesis that the odds ratio equals one with probability (power) 0.80. The Type I error probability associated with this test of this null hypothesis is 0.05.

### Non-inferiority trial

Suppose a pharmaceutical company would like to conduct a clinical trial to compare the efficacy of two antimicrobial agents when administered orally to patients with skin infections. Assume the true mean cure rate of the treatment is 0.85 and the true mean cure rate of the control is 0.65. We consider a difference of less than 0.10 in cure rate to be of no clinical importance (i.e. delta = -0.10). Assuming a one-sided test size of 5% and a power of 80% how many subjects should be included in the trial?

epi.ssninfb(treat = 0.85, control = 0.65, delta = -0.10, n = NA, r = 1, power = 0.80, nfractional = FALSE, alpha = 0.05)$n.total
#> [1] 50

A total of 50 subjects need to be enrolled in the trial, 25 in the treatment group and 25 in the control group.
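The non-inferiority result can likewise be checked against the usual two-proportion formula, n per group = (z_α + z_β)² [p_T(1−p_T) + p_C(1−p_C)] / (p_T − p_C − δ)², with a one-sided z_α. A Python sketch (the function name and its arguments are ours, not epiR's):

```python
from math import ceil
from statistics import NormalDist

def n_noninferiority(treat, control, delta, power=0.80, alpha=0.05):
    """Per-group sample size for a non-inferiority test of two proportions."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha)        # one-sided test size
    z_b = nd.inv_cdf(power)
    var = treat * (1 - treat) + control * (1 - control)
    n = (z_a + z_b)**2 * var / (treat - control - delta)**2
    return ceil(n)

n_group = n_noninferiority(0.85, 0.65, -0.10)
print(n_group, 2 * n_group)   # 25 per group, 50 in total, matching epi.ssninfb
```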
### Population sensitivity using a diagnostic test with imperfect specificity

We’ll continue with the brucellosis example introduced above. Imagine the test we’re using has a diagnostic sensitivity of 0.95 (as before) but this time it has a specificity of 0.98. How many herds need to be sampled to be 95% certain that the prevalence of brucellosis in dairy herds is less than the design prevalence if less than a specified number of tests return a positive result?

rsu.sssep.rsfreecalc(N = 5000, pstar = 0.05, mse.p = 0.95, msp.p = 0.95, se.u = 0.95, sp.u = 0.98, method = "hypergeometric", max.ss = 32000)$summary
#> n N c pstar p1 se.p sp.p
#> 1 194 5000 7 0.05 0.04898102 0.951019 0.9573939

A population sensitivity of 95% is achieved with a total sample size of 194 herds, assuming a cut-point of 7 or more positive herds is required to return a positive survey result. Note the substantial increase in sample size when diagnostic specificity is imperfect (194 herds when specificity is 0.98 compared with 63 when specificity is 1.00). The relatively low design prevalence in combination with imperfect specificity means that false positives are more likely to be a problem in this population, so the number tested needs to be (substantially) increased. Increase the design prevalence to 0.10 to see its effect on estimated sample size.

rsu.sssep.rsfreecalc(N = 5000, pstar = 0.10, mse.p = 0.95, msp.p = 0.95, se.u = 0.95, sp.u = 0.98, method = "hypergeometric", max.ss = 32000)$summary
#> n N c pstar p1 se.p sp.p
#> 1 66 5000 3 0.1 0.04992274 0.9500773 0.9566218

The required sample size decreases to 66 and the cut-point to 3 positives due to: (1) the expected reduction in the number of false positives; and (2) the greater difference between true and false positive rates at the higher design prevalence.

### One-stage cluster sampling

An aid project has distributed cook stoves in a single province in a resource-poor country.
At the end of three years, the donors would like to know what proportion of households are still using their donated stove. A cross-sectional study is planned where villages in a province will be sampled and all households (approximately 75 per village) will be visited to determine if the donated stove is still in use. A pilot study of the prevalence of stove usage in five villages showed that 0.46 of householders were still using their stove and the intracluster correlation coefficient (ICC) for stove use within villages is in the order of 0.20. If the donor wanted to be 95% confident that the survey estimate of stove usage was within 10% of the true population value, how many villages (clusters) need to be sampled?

epi.ssclus1estb(b = 75, Py = 0.46, epsilon.r = 0.10, rho = 0.20, conf.level = 0.95)$n.psu
#> [1] 96

A total of 96 villages need to be sampled to meet the requirements of the study.

### One-stage cluster sampling (continued)

Continuing the example above, we are now told that the number of households per village varies. The average number of households per village is 75 with a 0.025 quantile of 40 households and a 0.975 quantile of 180. Assuming the number of households per village follows a normal distribution, the expected standard deviation of the number of households per village is in the order of (180 - 40) $$\div$$ 4 = 35. How many villages need to be sampled?

epi.ssclus1estb(b = c(75,35), Py = 0.46, epsilon.r = 0.10, rho = 0.20, conf.level = 0.95)$n.psu
#> [1] 115

A total of 115 villages need to be sampled to meet the requirements of the study.

### Two-stage cluster sampling

This example is adapted from Bennett et al. (1991). We intend to conduct a cross-sectional study to determine the prevalence of disease X in a given country. The expected prevalence of disease is thought to be around 20%. Previous studies report an intracluster correlation coefficient for this disease to be 0.02.
Suppose that we want to be 95% certain that our estimate of the prevalence of disease is within 5% of the true population value and that we intend to sample 20 individuals per cluster. How many clusters should be sampled to meet the requirements of the study?

# From first principles:
n.crude <- epi.sssimpleestb(N = 1E+06, Py = 0.20, epsilon.r = 0.05 / 0.20, se = 1, sp = 1, nfractional = FALSE, conf.level = 0.95)
n.crude
#> [1] 246

# A total of 246 subjects need to be enrolled into the study. Calculate the design effect:
rho <- 0.02; b <- 20
D <- rho * (b - 1) + 1; D
#> [1] 1.38

# The design effect is 1.38. Our crude sample size estimate needs to be increased by a factor of 1.38.
n.adj <- ceiling(n.crude * D)
n.adj
#> [1] 340

# After accounting for lack of independence in the data a total of 340 subjects need to be enrolled into the study. How many clusters are required?
ceiling(n.adj / b)
#> [1] 17

# Do all of the above using epi.ssclus2estb:
epi.ssclus2estb(b = 20, Py = 0.20, epsilon.r = 0.05 / 0.20, rho = 0.02, nfractional = FALSE, conf.level = 0.95)
#> Warning: The calculated number of primary sampling units (n.psu) is 17. At
#> least 25 primary sampling units are recommended for two-stage cluster sampling
#> designs.
#> $n.psu
#> [1] 17
#>
#> $n.ssu
#> [1] 340
#>
#> $DEF
#> [1] 1.38
#>
#> $rho
#> [1] 0.02

A total of 17 clusters need to be sampled to meet the specifications of this study. epi.ssclus2estb returns a warning message that the number of clusters is less than 25.

### Two-stage cluster sampling (continued)

Continuing the brucellosis prevalence example (above), being seropositive to brucellosis is likely to cluster within herds. Otte and Gumm (1997) cite the intracluster correlation coefficient for Brucella abortus in cattle to be in the order of 0.09. Adjust your sample size estimate of 545 to account for lack of independence in the data, i.e. clustering at the herd level.
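The design-effect arithmetic in this example is simple enough to reproduce outside R. A Python sketch of the same steps (variable names ours), using only the standard library:

```python
from math import ceil
from statistics import NormalDist

# Crude (simple random sample) size: prevalence 0.20, absolute precision 0.05.
z = NormalDist().inv_cdf(0.975)                # 1.96 for a 95% CI
p, eps = 0.20, 0.05
n_crude = ceil(z**2 * p * (1 - p) / eps**2)    # 246

# Design effect for ICC rho = 0.02 and b = 20 individuals per cluster.
rho, b = 0.02, 20
D = rho * (b - 1) + 1                          # 1.38

n_adj = ceil(n_crude * D)                      # 340 subjects after adjustment
n_clusters = ceil(n_adj / b)                   # 17 clusters
print(n_crude, D, n_adj, n_clusters)
```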
Assume that b = 10 animals will be sampled per herd:

n.crude <- epi.sssimpleestb(N = 1E+06, Py = 0.15, epsilon.r = 0.20, se = 1, sp = 1, nfractional = FALSE, conf.level = 0.95)
n.crude
#> [1] 545

rho <- 0.09; b <- 10
D <- rho * (b - 1) + 1; D
#> [1] 1.81

n.adj <- ceiling(n.crude * D)
n.adj
#> [1] 987

# Similar to the example above, we can do all of these calculations using epi.ssclus2estb:
epi.ssclus2estb(b = 10, Py = 0.15, epsilon.r = 0.20, rho = 0.09, nfractional = FALSE, conf.level = 0.95)
#> $n.psu
#> [1] 99
#>
#> $n.ssu
#> [1] 986
#>
#> $DEF
#> [1] 1.81
#>
#> $rho
#> [1] 0.09

After accounting for clustering at the herd level we estimate that a total of ceiling(545 $$\times$$ 1.81) = 987 cattle need to be sampled to meet the requirements of the survey (epi.ssclus2estb reports 986, presumably because it carries the unrounded crude sample size through the calculation). If 10 cows are sampled per herd this means that a total of (987 $$\div$$ 10) = 99 herds are required.

### References

Bennett, S, T Woods, W Liyanage, and D Smith. 1991. “A Simplified General Method for Cluster-Sample Surveys of Health in Developing Countries.” World Health Statistics Quarterly 44: 98–106.

Otte, JM, and ID Gumm. 1997. “Intra-Cluster Correlation Coefficients of 20 Infections Calculated from the Results of Cluster-Sample Surveys.” Preventive Veterinary Medicine 31: 147–50.
http://mathhelpforum.com/statistics/88197-normal-distribution-problem-attempted-working-inside.html
# Math Help - Normal Distribution Problem (Attempted Working Inside)

1. ## Normal Distribution Problem (Attempted Working Inside)

X~N(8,25)

P( ${-a}\leq X \leq{a}$)=0.4

We need to find a. I decided to use GC and standardize it to find a.

P( ${(-a-8)/5}\leq {Z} \leq{(a-8)/5}$)=0.4

Since it is now standardized, P( ${(-a-8)/5}\leq {Z} \leq{(a-8)/5}$) =1-2P( ${Z}\geq{(a-8)/5})$=0.4

P( ${Z}\geq{(a-8)/5})$=0.3

P( ${Z}\leq{(a-8)/5})$=0.7

So I used Ti84 InvNorm Function to get ${(a-8)/5}$=0.5244

Which means ${a}$ is 10.622. However, the ans is 6.75. Anyone can point out my mistake? Thanks.
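The slip in the working above is the symmetry step: the interval [−a, a] is symmetric about 0, but X is centred at 8, so P(−a ≤ X ≤ a) is not 1 − 2P(Z ≥ (a−8)/5); both tails must be kept. A quick numerical check with Python's standard library (used here in place of the TI-84) confirms the book's answer a ≈ 6.75:

```python
from statistics import NormalDist

X = NormalDist(mu=8, sigma=5)   # X ~ N(8, 25), i.e. mean 8, variance 25

def prob(a):
    # P(-a <= X <= a): keep BOTH tails; the interval is not symmetric about 8.
    return X.cdf(a) - X.cdf(-a)

print(prob(10.622))   # ~0.70: the attempted answer gives the wrong probability
print(prob(6.75))     # ~0.40: the book's answer checks out
```

Note that a = 10.622 actually gives P(−a ≤ X ≤ a) ≈ 0.70, exactly the 0.7 that appeared in the working, because the lower tail Φ((−a−8)/5) was silently dropped.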
https://brilliant.org/problems/sorry/
# sorry

Observe the following code and determine how many times "I am very very sorry" is displayed:

int k=10;
for(int i=0;i<k--;i++)
System.out.println("I am very very sorry");
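For readers unsure of the evaluation order: in Java, the condition `i < k--` compares `i` against the current value of `k` and only then decrements `k`. A Python re-implementation of that semantics (our own trace harness, not part of the problem) lets you count the iterations:

```python
# Simulate the Java loop: for (int i = 0; i < k--; i++) { println(...); }
# The condition uses k's value *before* the post-decrement takes effect.
k = 10
i = 0
count = 0
while True:
    cond = i < k      # compare first ...
    k -= 1            # ... then apply the post-decrement to k
    if not cond:
        break
    count += 1        # loop body: one println per iteration
    i += 1            # the i++ update

print(count)          # prints 5
```

Since i increases by 1 and k decreases by 1 on every pass, the gap closes twice as fast as a plain `i < k` loop would suggest.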
http://space-scitechjournal.org.ua/en/archive/2006/5-6/05
# On the ratio between the specular and Bragg components scattered by the quasi-Gaussian sea surface

1 Zapevalov, A. S., 2 Pokazeev, K. V., 1 Pustovoitenko, V. V.
1 Marine Hydrophysical Institute of the National Academy of Sciences of Ukraine, Sevastopol, AR Crimea, Ukraine
2 Lomonosov Moscow State University, Moscow, Russia

Kosm. nauka tehnol. 2006, 12(5-6):023-029
https://doi.org/10.15407/knit2006.05.023
Publication Language: Russian

Abstract: The ratio between the specular and Bragg components in the reflected radio signal is analysed on the basis of well-known scattering models and of measurement data on sea surface slopes. The measurements were carried out from the Black Sea oceanographic platform with the use of a 2D slope meter. It is shown that when radio-sounding the sea surface, deviations of the sea surface slope distribution from the Gaussian distribution appear mainly at radio-wave incidence angles of up to 15°. Taking the quasi-Gaussian character of the slope distribution into account shifts by 1...2° the incidence domain in which the specular component dominates the Bragg component.

Keywords: 2D slope meter, radio sounding, specular and Bragg components
http://openstudy.com/updates/507d7fc9e4b07c5f7c1fdf38
# |8x-1|-7>-11

## anonymous, 3 years ago

|8x-1|-7>-11. I know it has no solution, but I don't know why!

1. anonymous: Start by adding 7 to each side.

2. anonymous: $|8x-1|>-4$

3. anonymous: Do you get that?

4. anonymous: Anyway, the solution is all real numbers: an absolute value is never negative, so $|8x-1|\ge 0>-4$ holds for every $x$. ("No solution" would be the answer to the reversed inequality $|8x-1|<-4$.)
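The conclusion can be sanity-checked numerically. A minimal sketch in plain Python (the sample points are arbitrary):

```python
# |8x - 1| - 7 > -11  simplifies to  |8x - 1| > -4.
# An absolute value is never negative, so it always exceeds -4:
# every real x satisfies the original inequality.
def satisfies(x):
    return abs(8 * x - 1) - 7 > -11

samples = [-1e6, -3/8, 0.0, 1/8, 1/2, 5/8, 1e6]
print(all(satisfies(x) for x in samples))  # True
```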
https://www.homebuiltairplanes.com/forums/threads/theoretical-question-about-wing-thickness.31756/
# Theoretical question about wing thickness

Discussion in 'Aircraft Design / Aerodynamics / New Technology', started by geosnooker2000, Jun 10, 2019.

### 1. geosnooker2000, Jun 10, 2019

Using the Zenair CH 640 as an example, Mr. Chris Heintz claims: "I guess fat wings are one of my trademarks. The depth lets me build a strong wing that is very light, and gives excellent stall characteristics as well. Under 200 mph, the thickness doesn't contribute to significant drag, so why not use it for its other advantages?"

This is a question about the effects of thinning the design of the wing. I am assuming that would result in less drag and a higher stall speed? The stall would be more abrupt? I know that going with a different airfoil design would have an effect, but for now, let's assume the same camber and generally the same airfoil design, just slimmer.

### 2. pictsidhe, Jun 10, 2019

Mr Heintz is right; you won't gain anything at low speeds from a skinny wing. Unflapped Clmax/Cdmin is maximum around 15%. There is little loss above that; 18% is almost as good. Using a wing of less than 15% will actually add drag as well as make the plane heavier. Add flaps, and the optimum is even thicker than 15%.

### 3. wsimpso1, Jun 10, 2019

Sorry to be a grammar nut, but I get neither requests nor questions out of the above. Since I am wired a bit weird, I might be able to guess at the intent...

At Mach numbers in our range, minimum drag on turbulent-flow wings and on well-executed laminar-flow wings occurs in the 12-15% thickness range. There is no point in going thinner than 12% even on tail sections, and little penalty in going to 18%. The Boomerang, the best-performing light twin ever flown, uses 17% thick wings. Thinner foils require more weight to make strength, can be higher drag at cruise, and can have lower stall Cl. There are exceptions, and the data will show that.

As to stall behaviour of a real airplane, there are lots of things involved besides the airfoil selected. Good stall behaviour can be achieved lots of ways. Bad behaviour is available lots of ways too. More on that later...

If you want to explore the effects of foil thickness, camber, and base profiles, there are a couple of books you should have on your shelf:

Theory of Wing Sections by Abbott and von Doenhoff
GA Airfoils by Harry Riblett

Neither is expensive and both have LOTS of info on whole families of foils. TOWS is test data by NACA; Riblett's work is done in software, but his foils have proven out very nicely in a bunch of different airplanes. I have it on good authority that while Harry was a bit of a nutjob, his airfoils are good.

Billski

### 4. mcrae0104, Jun 10, 2019

Two wings with the same camber but different thickness will produce the same amount of lift when they fly at the same angle of attack. Generally, for two wings with identical camber, the thinner one will have marginally less drag but the thicker one will stall at a lower angle of attack. Therefore the thinner wing will have the lower stall speed. (There are limits to this, such as an extremely thin wing with a leading edge radius so small that it drives the Clmax AoA down.)

I would add one book to Billski's recommendations, and I think it's the best one to start with: Airfoil Selection: Understanding and Choosing Airfoils for Light Aircraft by Barnaby Wainfan. It's 60 or so pages and will give you a good background when you dive into TOWS or Riblett (both of which are excellent).

### 5. geosnooker2000, Jun 10, 2019

I find this comment very interesting. I will check out the readings. Thank you, everyone.

### 6. Riggerrob, Jun 10, 2019

Note that most of the after-market STOL kits (Robertson, Sportsman, Wren, etc.) primarily increase the leading edge radius on stock Cessna wings. Cessna even adapted one of those STOL kits to later production C-172s.

In an extreme case, a very thin wing (with the same camber) would have such a sharp leading edge that it trips airflow earlier than a fat (18% thickness) wing. The other disadvantage of too thin a wing is that spar weight increases dramatically as the wing gets thinner: spar weight is inversely proportional to the square of the spar depth. Under-cambered wings are also a pain to build with their concave surfaces. The majority of homebuilt wings have flat bottom skins to simplify jigging on flat tables.

### 7. TFF, Jun 11, 2019

12-14 ish percent thickness is pretty much the spot for all-around performance. Unless you are only interested in speed or STOL, the middle is the best overall. I would pick your airfoil and then see if you can make a solid wing design in the 12-14% range. If you want to bias it one way or another, 1% at the most. Anything else, I would be picking a different airfoil at a solid, doable spar depth.

### 8. bmcj, Jun 11, 2019

Remember that airfoil shape has an effect on control forces. Some of the high-lift kits for Cessnas tend to make the controls feel a bit heavier.

### 9. mcrae0104, Jun 11, 2019

I didn't think that was possible.

### 10. geosnooker2000, Jun 16, 2019

I was able to get a plans elevation view of the Sling TSi airfoil with dimensions. It turns out it is about 14.4%. I can't find a good elevation (section) view of the CH640 wing to compare the two, short of buying the $475 plans. Any CH640 builders out there who would care to share the figures?

### 11. Heliano, Jun 17, 2019

Very interesting thread. Let me propose a somewhat more pragmatic approach: how about surveying those designs that worked out right and seeing the kind of airfoil that was used? One thing I've noticed is that wings with higher aspect ratio have greater thickness, for a very good reason: thicker wings are lighter under high bending moments, and they are stiffer too, which means less risk of aeroelastic problems. However, lower-aspect-ratio wings do not exhibit such great thickness. Examples:

1. Heron, a UAV from Israel with AR of 18: root airfoil 21%, tip 15%
2. Fokker 27, a turboprop that was popular in the 60's: wing AR of 12, root 21%, tip 21%
3. Glasflugel Libelle, a German glider: AR of 23, Wortmann airfoil, 18% thickness

Therefore it seems that high-AR wings need greater thickness for structural reasons; lower-AR wings don't. But it seems to me that when it comes to safety, the point is stall abruptness and stall propagation. Airfoils such as the 5-digit 43015, for example, should be used with great care due to the abrupt stall; the 43012 is even worse. In these cases washout and/or slotted ailerons are a must.

### 12. BJC, Jun 17, 2019

All of the airplanes that I have flown with the Clark Y and USA 35B use washout to improve handling near the stall. Very common. Ditto the 23012 / 2301X on multiple well-known, well-performing airplanes. Leo won the world aerobatic championship with a 23012 / 2301X without washout.

The Ercoupe (with its limited elevator authority) uses a 43013, and I see that there are several non-USA light aircraft using the 43012. Does anyone here have first-hand experience with one of them? I know that we have some Ercoupe operators here.

BJC

### 13. pictsidhe, Jun 17, 2019

If you are wondering about airfoils for a low-aspect delta wing, they are a different bucket of bananas. Deltas have a huge amount of induced drag at high Cl, rendering high-Cl foils somewhat pointless unless you have SR-71 amounts of thrust.

### 14. Heliano, Jun 18, 2019

Answering your question, BJC: we had an aircraft here in my country, delivered to the Air Force for primary training in the late 60's, which had exactly the same Ercoupe profile (43013). I've flown some 10 hours on it. This aircraft did not have washout. It did not have slotted ailerons or any other way of improving stall propagation. It killed a test pilot: the aircraft went into a flat spin and could not recover. Another 5 or 6 were also lost due to spins or due to a sudden uncommanded snap roll in the traffic pattern. The aircraft had a very malign stall characteristic. It was named the Aerotec T-23 Uirapuru. I do not mean that all aircraft using this type of airfoil will be killers, but the designer must exercise care. The Corby Starlet, a popular Australian design, uses a 430xx airfoil and does not seem to have problems. But if you look at the attached Cl vs. alpha graph, you can see the sharp drop in Cl at stall. That is an indication that such an airfoil is not very forgiving.

(Attached: 43015.jpg, the Cl vs. alpha curve for the 43015.)

### 15. pictsidhe, Jun 18, 2019

I just read the Wikipedia page on the Uirapuru. The ventral fin stall fix it got is the same one that fixed the Hurricane's reluctance to stop spinning. Nothing about its stalling behaviour. But I am deeply wary of 5-digit airfoils as they can be very nasty in certain configurations. I'll leave it to better aerodynamicists than me to work around their quirks...

Edit: It appears to lack wing root fillets. Fillets radically changed the stall behaviour of the P-40 evolution of the P-36 and of early Spitfire predecessors.

### 16. BJC, Jun 19, 2019

I am not familiar with the T-23. According to Wikipedia, they added a ventral fin to cure a spin problem, and that is logical, because unrecoverable spins in GA-type aircraft usually are the result of too little yaw stability or yaw control force. I knew about the sharp stall characteristic of the 2-D airfoil. As you point out, having a 3-D wing that stalls inboard first tames the actual whole-wing behavior.

BJC

### 17. pwood66889, Jun 19, 2019

Thank you, BJC, for the opportunity... I believe the 430xx airfoils have hysteresis; that is to say, they do not "unstall" at the same angle of attack (AoA) as they stall (I may not be saying that well, and probably misspelling it to boot). The Ercoupe up elevator is set to keep the wing from being held beyond critical AoA. Flow over the top of the foil would resume after the elevator authority limit is reached. Had to demonstrate this when breaking in CFIs that were new to the 'coupe during the BFR. This may be the reason behind the mandatory placard: "This aircraft characteristically incapable of spinning."

I pointed out to Mr. Harry Riblett that the 'coupe was tractable despite the 430xx series. We "agree to disagree." He avers that the series is dangerous; I say I'm flying the whole plane, not just the airfoil! And I have over 300 hours PIC in type.

In his book "From the Ground Up," Mr. Fred Weick said 13% thickness was the best compromise he could come up with. Hope this is useful to Mr. Heliano.

### 18. BBerson, Jun 19, 2019

The Schweizer 2-22 I soloed had about the most benign stall with the 43012 airfoil. Same for the 2-33. It wouldn't spin. From the NACA graph in post 14, it has a sharp initial drop but then regains some lift up to 30° AoA.

### 19. Scheny, Jun 19, 2019

Gliders normally fly at lower induced angles of attack due to high aspect ratio.

BR, Andreas

### 20. pictsidhe, Jun 19, 2019

One of NACA's few test failings is that they did not test hysteresis on airfoils. A lot of hysteresis is bad for a docile stall.
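The thickness / CLmax / stall-speed trade running through the thread follows from the lift equation L = 0.5 rho V^2 S CL. A small sketch (the weight, wing area, and CLmax values below are made up purely for illustration):

```python
import math

# V_stall = sqrt(2 W / (rho * S * CL_max)), from setting L = 0.5*rho*V^2*S*CL = W.
def v_stall(weight_n, wing_area_m2, cl_max, rho=1.225):
    return math.sqrt(2 * weight_n / (rho * wing_area_m2 * cl_max))

w, s = 6000.0, 12.0                # roughly a 600 kg airplane with a 12 m^2 wing
for cl_max in (1.3, 1.5, 1.7):     # e.g. thicker section, bigger LE radius, flaps
    print(cl_max, round(v_stall(w, s, cl_max), 1), "m/s")
```

A higher usable CLmax, one benefit posters attribute to thicker sections and generous leading-edge radii, directly lowers the stall speed; the airfoil choice then sets how abruptly that stall arrives.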
https://simple.wikipedia.org/wiki/Invertible_matrix
# Invertible matrix

In linear algebra, there are certain matrices which have the property that when they are multiplied with a particular other matrix, the result is the identity matrix ${\displaystyle I}$ (the matrix with ones on its main diagonal and zeros everywhere else). If ${\displaystyle A}$ is such a matrix, then ${\displaystyle A}$ is called invertible and its inverse is called ${\displaystyle A^{-1}}$,[1] with:[2] ${\displaystyle A\cdot A^{-1}=A^{-1}\cdot A=I}$
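The defining property is easy to check numerically. A minimal sketch (assumes NumPy; the matrix is an arbitrary invertible example):

```python
import numpy as np

# A 2x2 matrix with nonzero determinant and its inverse; multiplying
# them in either order gives the identity matrix I.
a = np.array([[4.0, 7.0],
              [2.0, 6.0]])
a_inv = np.linalg.inv(a)

print(np.allclose(a @ a_inv, np.eye(2)))  # True
print(np.allclose(a_inv @ a, np.eye(2)))  # True
```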
https://solvedlib.com/a-hospital-analyzed-the-relationship-between-the,250473
# A hospital analyzed the relationship between the distance an employee must travel between home and work...

###### Question:

A hospital analyzed the relationship between the distance an employee must travel between home and work (in 10s of miles) and the annual number of unauthorized work absences (in days) for several randomly selected hospital employees. The regression analysis showed dfErr = 13, Sxx = 24.56, Syy = 69.60, and Sxy = -38.20. What is the 95% confidence interval for B1 (with appropriate units)?

a. None of the answers is correct
b. (-0.1941 days/mile, -0.1170 days/mile)
c. (-1.8717 days, -1.2391 days)
d. (-1.9412 days, -1.1696 days)
e. (-19.4116 days/mile, -11.6959 days/mile)
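The interval for the slope follows from b1 ± t(0.025, dfErr) · sqrt(MSE / Sxx). The summary statistics in the scan are garbled, so the values below are reconstructions rather than given data (they do reproduce choice d exactly, which suggests the reading is right):

```python
import math

# Reconstructed summary statistics (assumed; the scanned values are garbled):
df_err = 13
sxx, syy, sxy = 24.56, 69.60, -38.20

b1 = sxy / sxx                    # least-squares slope estimate
sse = syy - sxy ** 2 / sxx        # error sum of squares
mse = sse / df_err
se_b1 = math.sqrt(mse / sxx)      # standard error of the slope
t_crit = 2.160                    # t_{0.025, 13} from a t-table

lo, hi = b1 - t_crit * se_b1, b1 + t_crit * se_b1
print(f"({lo:.4f}, {hi:.4f})")    # (-1.9412, -1.1696), i.e. choice (d)
```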
http://math.stackexchange.com/questions/152520/lp-norms-of-fourier-transform-of-solutions-of-hyperbolic-burgers-equation-at?answertab=active
# $L^p$ norms of the Fourier transform of solutions of the hyperbolic Burgers equation at the time of first blow-up

I am struggling to understand the behavior of the Fourier transform (in the $x$ variable) of initially smooth solutions of the hyperbolic Burgers equation in 1-D, $\partial_t u + u\,\partial_x u = 0$. I start with a smooth and rapidly decaying initial condition $u(x,0)=u_0(x)$ on $\Bbb R$. The solution evolves in time until it breaks down. At the time of first breakdown $t=T$ I look at the Fourier transform $\hat u(k,T)$ of the solution $u(x,T)$. In particular, I am trying hard to understand how and why the $L^p$ norms of the Fourier transform $\hat u$ remain finite at the time of first blow-up for $p>1$. I think that if one uses weak (or Lorentz) norms, then this non-blow-up extends even to the weak $L^1$ norm.

The only way I have been able to understand this property is via the conservation law for the $L^\infty$ norm of $u$. For the $\|u \|_{L^\infty}$ norm to be defined at the time of first blow-up, the Fourier transform needs to remain in a weak $L^1$ space. Interpolation explains the rest.

My question is whether there is a way to understand the non-blow-up of the said $L^p$ norms of the Fourier transform $\hat u$ without invoking the conservation law for the $L^\infty$ norm of $u$. What I seek is some kind of direct Fourier-analytic way to see what is going on. I have reached an impasse. I will be very grateful for any insight or advice.

- I am starting to wonder whether maybe there is NO known way except to use conservation laws. (The bounty just ended too.) Why I think there may be no answer: the Burgers equation has a 2-parameter scale-invariance symmetry. Without a conservation law, it may not be possible to "peg" suitable norms to try to avoid the worst-case scenario. By worst-case scenarios, I mean the usual inequalities such as Hölder's and Young's (convolution) inequalities, etc.
–  Gandhi Viswanathan Jun 11 '12 at 19:01

This is a fun question! I have started playing with it in the case of periodic initial data $u(x,0) = \sin(x)$. In this case you can write down the explicit solution

$u(x,t) = \sum_{n=1}^\infty b_n(t) \sin(nx)$, where $b_n(t) = -2 J_n(nt)/(nt)$ ($J_n$ being the Bessel function of order $n$).

From this you can compute some $L^p$ norms explicitly to get a sense of what is happening. This is not a full solution, but it is as far as I got before I had to get back to work...

The above result is from G.W. Platzman, An exact integral of complete spectral equations for unsteady one-dimensional flow, Tellus, XVI (1964), pp. 422–431.

- Thanks! The solution $u(\cdot,t)$ becomes progressively more "inclined" until the first derivative $\partial u/\partial x$ becomes infinite at $x= \pm n \pi$. Beyond this time of first breakdown, there are no strong solutions, because $u$ becomes multivalued. Of course, weak solutions exist... My question is whether we can understand, purely in terms of Fourier transforms, why the Fourier transform $\hat u$ remains in a weak $L^1$ space at the time of breakdown. I want to know whether there is a way of seeing this without recourse to the conservation of $L^p$ norms of $u$. –  Gandhi Viswanathan Jun 4 '12 at 21:34
- Correction: I should have written $x=(2n+1)\pi$... –  Gandhi Viswanathan Jun 4 '12 at 22:42
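To see the Fourier-side behavior at breakdown concretely, Platzman's coefficients can be evaluated with a short script (added here for illustration, not part of the original thread; the integral representation of $J_n$ is standard). For $u(x,0)=\sin(x)$ the first breakdown occurs at $t=1$, where $J_n(n)\sim c\,n^{-1/3}$, so $|b_n(1)|\sim n^{-4/3}$ and the coefficients stay absolutely summable even though $u_x$ blows up:

```python
import math

def bessel_j(n, x, m=40000):
    # Integer-order Bessel function via the standard integral representation
    # J_n(x) = (1/pi) * \int_0^pi cos(n*tau - x*sin(tau)) dtau  (trapezoid rule).
    h = math.pi / m
    s = 0.5 * (math.cos(0.0) + math.cos(n * math.pi - x * math.sin(math.pi)))
    for k in range(1, m):
        tau = k * h
        s += math.cos(n * tau - x * math.sin(tau))
    return s * h / math.pi

def b_n(n, t):
    # Fourier coefficient in Platzman's series u(x,t) = sum_n b_n(t) sin(n x)
    return -2.0 * bessel_j(n, n * t) / (n * t)

# At the breakdown time t = 1:  J_n(n) ~ c * n**(-1/3), hence |b_n(1)| ~ n**(-4/3).
ns = [50, 100, 200, 400]
vals = [abs(b_n(n, 1.0)) for n in ns]
slope = math.log(vals[-1] / vals[0]) / math.log(ns[-1] / ns[0])
print(slope)  # ≈ -4/3
```

The measured log-log slope of $|b_n(1)|$ is very close to $-4/3$, consistent with the integrability properties of $\hat u$ discussed above.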
https://mathoverflow.net/questions/303732/a-system-of-homogeneous-linear-equations
# A system of homogeneous linear equations This is the "real-life" (but slightly more technical) version of a question I have asked recently. For a prime $p>10$, let $\mathcal L_X$, $\mathcal L_Y$, and $\mathcal L_Z$ denote the pencils of all those lines in $\mathbb F_p^2$ parallel to the lines $$X:=\{(x,0)\colon x\in\mathbb F_p \}, \ Y:=\{(0,y)\colon y\in\mathbb F_p \}, \ Z:=\{(z,z)\colon z\in\mathbb F_p \},$$ respectively; thus, $|\mathcal L_X|=|\mathcal L_Y|=|\mathcal L_Z|=p$. Write $$\chi(x,y) := \omega^x,\quad (x,y)\in\mathbb F_p^2,$$ where $\omega$ is a fixed primitive root of unity of degree $p$. Given a set $S\subseteq\mathbb F_p^2$, with every element $s\in S$ associate a formal variable $x_s$, and consider the system of homogeneous linear equations \begin{gather*} \sum_{s\in S\cap\ell} x_s = 0,\quad \ell\in\mathcal L_X\cup\mathcal L_Y, \\ \sum_{s\in S\cap\ell} \chi(s)\,x_s=0, \quad \ell \in \mathcal L_Z; \end{gather*} notice that there are $3p$ equations and $|S|$ variables. Does there exist a set $S\subseteq\mathbb F_p^2$ of size $|S|<3p$ for which this system has a solution such that the set $\{s\in S\colon x_s\ne 0\}$ meets every line in $\mathbb F_p^2$? • It looks like $S$ is not needed here -- we can assume $S=\mathbb{F}_p^2$ and look for solutions with $|\{s\in \mathbb{F}_p^2\colon x_s\ne 0\}|<3p$. – Max Alekseyev Jun 27 '18 at 13:45 • @MaxAlekseyev: right, $S$ is not critical, just a matter of notation. – Seva Jun 27 '18 at 14:08 • Removing $S$ would simplify formulation. – Max Alekseyev Jun 27 '18 at 15:22
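To experiment with the question numerically, one can build the coefficient matrix for the smallest admissible prime and measure its rank (a sketch of my own, not from the question; following Max Alekseyev's comment I take $S=\mathbb F_p^2$, and this only profiles the system — it does not search for the sparse null vectors asked about):

```python
import cmath

p = 11                                   # smallest prime > 10
omega = cmath.exp(2j * cmath.pi / p)
idx = lambda x, y: x * p + y             # flatten F_p^2 to indices 0..p^2-1

rows = []
for y0 in range(p):                      # lines parallel to X: y = y0, plain sums
    r = [0j] * (p * p)
    for x in range(p):
        r[idx(x, y0)] = 1
    rows.append(r)
for x0 in range(p):                      # lines parallel to Y: x = x0, plain sums
    r = [0j] * (p * p)
    for y in range(p):
        r[idx(x0, y)] = 1
    rows.append(r)
for c in range(p):                       # lines parallel to Z: y = x + c, chi-weighted
    r = [0j] * (p * p)
    for x in range(p):
        r[idx(x, (x + c) % p)] = omega ** x
    rows.append(r)

def rank(mat, tol=1e-7):
    # Gaussian elimination with partial pivoting over the complex numbers.
    mat = [row[:] for row in mat]
    rk = 0
    for col in range(len(mat[0])):
        piv = max(range(rk, len(mat)), key=lambda i: abs(mat[i][col]))
        if abs(mat[piv][col]) < tol:
            continue
        mat[rk], mat[piv] = mat[piv], mat[rk]
        for i in range(rk + 1, len(mat)):
            f = mat[i][col] / mat[rk][col]
            mat[i] = [a - f * b for a, b in zip(mat[i], mat[rk])]
        rk += 1
        if rk == len(mat):
            break
    return rk

r = rank(rows)
print(len(rows), len(rows[0]), r)
```

Two dependencies are visible by hand: the sum of the $\mathcal L_X$ equations equals the sum of the $\mathcal L_Y$ equations (both give the total sum), and the $\omega^{x_0}$-weighted sum of the $\mathcal L_Y$ equations equals the sum of the $\mathcal L_Z$ equations, so the rank is at most $3p-2$.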
https://djnetworkz.wordpress.com/category/latex-2/
## Some LaTeX Tweaks…

Here are some $\LaTeX$ tweaks I used to prepare a template for one of our lab reports recently. Most of the help was received from http://tex.stackexchange.com/.

### Adding more columns to the index page than the default ‘Chapter No. – Chapter Name – Page’

\documentclass[10pt,letterpaper]{article}
\usepackage{lipsum,titletoc}
\titlecontents{section}
[0em]
{}
{}
{}
{\titlerule*[0.5em]{.}\contentspage}
\begin{document}
\noindent\begin{minipage}[c]{0.7\textwidth}
\renewcommand{\contentsname}{\Large \centerline{CONTENTS}}
\tableofcontents
\end{minipage}
%
\hspace{0.05\textwidth}
%
\begin{minipage}[c]{0.2\textwidth}
\begin{tabular}{lr}
\textbf{Date} & \textbf{Signature} \\[1em]
\end{tabular}
\end{minipage}
\end{document}

### Putting a Border Around Each Page

\usepackage{fancybox} % provides \fancypage and \doublebox
\begin{document}
\fancypage{%
\setlength{\fboxsep}{10pt}\doublebox}{}

### Preventing \chapter{} from beginning on a new page

\usepackage{etoolbox} % provides \patchcmd
\makeatletter
\patchcmd{\chapter}{\if@openright\cleardoublepage\else\clearpage\fi}{}{}{}
\makeatother
\begin{document}
.......................

## LaTeX trickz

Here is some ‘fine-tuning’ stuff for LaTeX that I recently learned. I was in the process of making a report for college, which required me to do the following formatting with LaTeX:

• Easily adjust the basic left, right, top and bottom margins.

Here is what I did for each of my requirements (obviously, learned from the vast knowledge base of the INTERNET 🙂 ):

\usepackage{titlesec}
\titleformat{\chapter}[display]
{\normalfont\huge\bfseries\centering}{\chaptertitlename\ \thechapter}{20pt}{\Huge}

\renewcommand{\contentsname}{\centerline{Table of contents}}
\tableofcontents

Easily adjust the basic left, right, top and bottom margins:
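(An aside I am adding, not from the original post: the snippet below relies on simplemargins.sty, which must be copied in by hand. The standard geometry package, bundled with modern TeX distributions, does the same thing in one line.)

```latex
% Equivalent margin setup using the geometry package:
\usepackage[left=4cm,right=3cm,top=3cm,bottom=3cm]{geometry}
```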
\usepackage{simplemargins} % simplemargins.sty has to be copied to your local LaTeX directory or working directory
\setleftmargin{4cm}
\setrightmargin{3cm}
\settopmargin{3cm}
\setbottommargin{3cm}

## LaTeX: Survival Skills

I write this not because I’m an expert but ‘cos of my love for this ‘LaTeX’ tool and the fact that it is very useful and efficient for students. Once you get used to this tool, the creation of professional project reports, seminar reports, abstracts, presentations etc. becomes child’s play. So, let’s start.

Basic Requirements:

1) A Linux distro (I use openSUSE 11.3, which comes preloaded with LaTeX; for distros that don’t, you can always download and install it from the LaTeX website).
2) A basic text editor (like gedit, vim, nano etc.)
3) A small amount of patience

Getting Started

Create a folder “mylatex” (yeah, this name can be anything). This will be the workplace for your LaTeX project. Create a text file named ‘first.tex’ (only the .tex extension is important). This is your source file. Open the first.tex file. Let’s create a basic document. Type the following into the file:

\documentclass[12pt,a4paper,oneside]{article} % this instructs latex to create an article with base font size 12pt, on A4 paper, to be printed single-sided
\begin{document}
This is my first \LaTeX{} document.
\end{document}

Now save the file and exit. Then open a terminal (Application->System->terminal in openSUSE; find it somewhere under the Applications category in other distros). Suppose you created the ‘mylatex’ folder on your Desktop; then type the following commands into your terminal:

cd ~/Desktop/mylatex    # changes directory (cd) to the mylatex directory in the Desktop directory
latex first.tex         # creates a .dvi version of your document

Now let’s add some sections and subsections to our document.
Open the first.tex file and edit it as follows (add the following between \begin{document} and \end{document}):

\section{This is my first section}
This is my first section.
\subsection{This is my first subsection}
This is my first subsection.
\section{This is my second section}
This is my second section.
\subsection{This is my first subsection in the second section}
This is my first subsection in the second section.
\subsubsection{This is my first subsubsection in the first subsection of the second section}
This is my first subsubsection in the first subsection of the second section.

All the sections, subsections and subsubsections are numbered automatically (that’s great, right? It even creates a Contents page automatically with correct numbers and page numbers. Wow! Now that’s cool!) For the contents page to appear, add this line at the place where you want your Contents page to appear:

\tableofcontents

Now, your file should look like this:

\documentclass[12pt,a4paper,oneside]{article}
\begin{document}
\tableofcontents
\newpage            % self explained
This is my first \LaTeX{} document.
\section{This is my first section}
This is my first section.
\subsection{This is my first subsection}
This is my first subsection.
\section{This is my second section}
This is my second section.
\subsection{This is my first subsection in the second section}
This is my first subsection in the second section.
\subsubsection{This is my first subsubsection in the first subsection of the second section}
This is my first subsubsection in the first subsection of the second section.
\end{document}

Run latex first.tex two or three times and use dvipdf as earlier. Now that I’ve given you a taste of LaTeX, work with it, play with it and explore it. You can always google to get lots of help on LaTeX. Next we’ll learn how to import graphics into our article. Till then, take care and enjoy the power of Freedom. Power 2 u!
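(A preview of the graphics-import topic promised above; this is my addition, not part of the original post. The graphicx package ships with every standard LaTeX distribution, and `myfigure` is a placeholder name. Note that the latex + dvipdf route shown above expects EPS figures, while pdflatex handles PNG/JPG/PDF directly.)

```latex
\usepackage{graphicx}  % in the preamble

% ... then, in the document body:
\begin{figure}[h]
  \centering
  \includegraphics[width=0.8\textwidth]{myfigure} % extension may be omitted
  \caption{My first imported graphic.}
\end{figure}
```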
## LaTeX

LaTeX is a high quality typesetting system and the de facto standard for the communication and publication of scientific documents. LaTeX is available as free software.

I was introduced to LaTeX by a friend of mine a few months ago. Since then I have been working with it and learning stuff. Initially one finds it too complex, but that is not the case after one works with it for, say, an hour. It is very easy to work with, yet powerful. It can be used to create documents with a high level of professionalism. The term document used here is a general tag which can include anything ranging from articles, books, reports and letters to presentations, and what not! The mathematical typesetting features of LaTeX are highly rated. In fact, LaTeX is derived from TeX, written by Donald Knuth (of The Art of Computer Programming fame), and the term literate programming (writing documentation as you go) was coined by Knuth (spelled as “Kanooth”) while writing the software. Modern documentation extractors like JavaDoc, CppDoc and NDoc owe their origin to this (excerpt from a mail from Praseed Pai http://praseedp.blogspot.com/).

Beamer is a LaTeX class that is used to create presentations. When I say presentations, they are something that stands a class apart! LaTeX documents can be fine tuned to any level, though this needs thorough knowledge of the LaTeX commands and syntax. There are plenty of resources available on the WWW for a thorough self study of LaTeX. Here are some tutorials: LaTeX beamer

Enjoy the power of Free Software… Power to You!
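As a taste of Beamer, here is a minimal skeleton I am adding for illustration (not from the original post; Madrid is one of the themes bundled with Beamer):

```latex
\documentclass{beamer}
\usetheme{Madrid}
\title{My First Beamer Talk}
\author{A. Student}
\begin{document}
\frame{\titlepage}
\begin{frame}{A first slide}
  \begin{itemize}
    \item Compile with \texttt{pdflatex talk.tex}
    \item Each \texttt{frame} environment becomes one slide
  \end{itemize}
\end{frame}
\end{document}
```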
http://science.sciencemag.org/content/149/3688/1111
Reports

# DNA: Reaction with Chloroquine

Science, 03 Sep 1965: Vol. 149, Issue 3688, pp. 1111-1113
DOI: 10.1126/science.149.3688.1111

## Abstract

Difference spectrophotometry shows that double-stranded DNA produces marked changes in the absorption spectrum of chloroquine; only minor changes occur with single-stranded DNA. A DNA-chloroquine complex was demonstrated to sediment in the analytical ultracentrifuge. Chloroquine strongly elevated the thermal dissociation temperature, Tm, of DNA. It is concluded that the drug forms a complex with DNA by ionic interaction and stabilizes the helix.
https://thewinnower.com/papers/fundamentals-of-relativization-ii-with-computational-analyses
# Fundamentals of Relativization II with Computational Analyses

1. Richard J Daley College
2. Malcolm X College
3. College of DuPage

### Abstract

Relativization: the act of constructing a physical model which obeys the Einstein equivalence principle. This paper corrects and extends the initial results regarding black hole thermodynamics revealed in the preceding paper. In the process of correcting and extending those initial upper bounds on the intensity of Hawking radiation from a micro black hole, I will find much more. It will be demonstrated that a Schrodinger-like equation derived in the previous papers, once solved in the position basis with the proper boundary conditions, gives a model which works on any imaginable length scale. This will be worked out analytically, with accuracy and precision, with the aid of Mathematica.

## Introduction

In light of reviewer comments on this paper, and having noticed an error in section 4.2 of (2), I have taken another, more mathematically rigorous and numerically precise look at my findings. I did find one error in section 4.2 of (2), and a better derivation of the results in that section. Crucially, the major result does not change; a number that was an upper bound is here given an exact value. In the pursuit of the exact expression for the intensity of Hawking radiation, I have been led by the math to a much more fundamental conclusion: a set of solutions to the equations which would govern such a black hole in this framework, applicable to any strongly gravitationally bound system at any length scale whatsoever, without any divergences or singularities.

The key quantity for answering questions about small scale gravity derived from my model is the potential energy as found in equation 27 of (2). Expressed in the momentum representation, the potential can be used to set up a Schrodinger-like equation which will involve an integral.
After working on this equation more, the results are not so easy to interpret. One has intuition about what the boundary conditions should be at the surface of a black hole in position space, but not in momentum space. To be most conservative, I have decided to switch to a position space representation of the local Feynman diagram expansion, which still leads to a potential approximately equal to a hyperbolic cosine.

Regarding the objections of the reviewers to the very concepts of the Fundamentals of Relativization: prior to submitting the first article in this series to The Winnower, I did submit to a well known scholarly open access journal, one associated with a well known institution/society of people who study physics. It does not matter which one. Rather mysteriously, two editors said this work was “incremental”; the third felt it did not make sense. (A sign of genuine review is that at least 1/3 of experts won’t agree on a paper one way or the other.) I did not ask: incremental building on what? At that point the opportunity to publish on The Winnower arose.

After a careful reading of a very fine textbook (1) on String/M-Theory, the very concepts of relativization are not totally new or unique. What is new is my insistence that the Einstein equivalence principle be promoted to a basic principle of nature in such a formulation, which I implemented by switching the roles of Poincare–Lorentz invariance and diffeomorphism invariance. In standard String/M-Theory, Poincare symmetry is the global symmetry (1). In this model there is global diffeomorphism symmetry and local Lorentz symmetry. I have not used the language of branes and strings and world sheets etc. up to this point: just spaces and manifolds of different kinds coexisting at the same time in the same equations.

This paper was initially prepared with Mathematica; for the sake of completeness I have included Mathematica input and output where appropriate for publication.
In addition, the actual notebook file, a PDF based on it, or a CDF will also be published with it. The Mathematica notebook used to compose this work should be considered as important as the web or PDF version on The Winnower. Every effort will be made to make this version identical to the Mathematica notebook, which will have more details of the calculations than can fit nicely in a PDF.

## Analysis in a Position-like Basis

By working with the Lorentz invariant x, $\boldsymbol{x=\gamma^{a}x_{a}}$, instead of the invariant momentum p, the nature of the equations is more easily revealed. Each value of x represents a set of values of x, y, z and t for which x is the same: a set of isometric four dimensional surfaces. This locally Lorentz invariant x is a parameter which specifies a set of equivalent geometries; each x defines a set of physically equivalent four dimensional surfaces. In the language used in relativization theory, x is a locally Lorentz invariant parameter used to write a Schrodinger equation for the Hilbert space H over the Minkowski space. In stringy M-Theory language, x is a fifth dimension in which these D4 branes may move. The important thing to remember is that these functions, which define vectors in Hilbert space, depend on the geometry of the underlying local Minkowski space tangent to the curved spacetime manifold. So this “position” with the dimension of “length” is a bit deceptive! Yet it simplifies the analysis considerably.

### The Schrodinger equation in x basis with hyperbolic cosine potential

Here is my reasoning, which will lead to the exact solutions for the position basis quantum states of a relativistically bound system; that means bound in such a way that there is an event horizon. This would include systems such as black holes of any size. This model could also apply to the whole of the universe near the time of the big bang.
The length/energy scale is set via the parameter L, which here stands for a length, not a momentum operator. L needs to be chosen so as to encompass the magnitude of whatever system is of interest. In the case of the universe, L would best be the Hubble length; in the case of “quantum” black holes, L should be the Planck length. In the following it will be shown that, for the solutions to a Schrodinger type equation with the potential derived in the previous paper in this series [1], L could even be zero. This model will work and give reasonable results at any imaginable length scale! I will enter the equations in a dimensionless form by introducing the simplest combination of parameters to cancel out the dimensionality.

$\boldsymbol{\text{DSolve}\left[\left\{-\frac{\hbar^{2}\psi^{\prime\prime}(x)}{2M}\frac{L}{hc}+\frac{\hbar^{2}R_{0}}{2M}\frac{L}{hc}v(x)\psi(x)-\text{En}\frac{L}{hc}\psi(x)=0\right\},\psi(x),x\right]}$

$\left\{\left\{\psi(x)\to c_{1}\text{MathieuC}\left[-\frac{8\text{En}L^{2}M}{\hbar^{2}},-2L^{2}R_{0},\frac{ix}{2L}\right]-c_{2}\text{MathieuS}\left[-\frac{8\text{En}L^{2}M}{\hbar^{2}},-2L^{2}R_{0},\frac{ix}{2L}\right]\right\}\right\}$

$\boldsymbol{\text{Solve}\left[\text{MathieuCharacteristicA}\left[n+\frac{1}{2},-2L^{2}R_{0}\right]\text{==}-\frac{8\text{En}L^{2}M}{\hbar^{2}},\text{En}\right]}$

$\left\{\left\{\text{En}\to-\frac{\hbar^{2}a_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)}{8L^{2}M}\right\}\right\}$

and the odd eigenvalues

$\boldsymbol{\text{Solve}\left[\text{MathieuCharacteristicB}\left[n+\frac{1}{2},-2L^{2}R_{0}\right]\text{==}-\frac{8\text{En}L^{2}M}{\hbar^{2}},\text{En}\right]}$

$\left\{\left\{\text{En}\to-\frac{\hbar^{2}b_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)}{8L^{2}M}\right\}\right\}$

### Even Solutions

For symmetric solutions, $\psi^{\prime}(0)=0$. In the periodic Floquet–Bloch form of the solution, $\psi(L)=e^{i\left(n+\frac{1}{2}\right)}$, where $n+\frac{1}{2}$ plays the role of the Mathieu characteristic exponent $\mu$.
$\boldsymbol{\text{DSolve}\left[\left\{-\frac{\hbar^{2}\psi_{a}^{\prime\prime}(% x)}{2M}\frac{L}{hc}+\frac{\hbar^{2}R_{0}}{2M}\frac{L}{hc}v(x)\psi_{a}(x)-\left% (-\frac{\hbar^{2}a_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)}{8L^{2}M}\right)% \frac{L}{hc}\psi_{a}(x)=0,\psi_{a}^{\prime}[0]=0,\psi_{a}(L)=e^{i\left(n+\frac% {1}{2}\right)}\right\},\psi_{a}(x),x\right]}$ $\left\{\left\{\psi_{a}(x)\to\frac{e^{i\left(n+\frac{1}{2}\right)}\text{% MathieuC}\left[a_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right),-2L^{2}R_{0},\frac{% ix}{2L}\right]}{\text{MathieuC}\left[a_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right% ),-2L^{2}R_{0},\frac{i}{2}\right]}\right\}\right\}$ ### Odd Solutions The odd, antisymmetric states are a bit different. $\boldsymbol{\text{DSolve}\left[\left\{-\frac{\hbar^{2}\psi_{b}^{\prime\prime}(% x)}{2M}\frac{L}{hc}+\frac{\hbar^{2}R_{0}}{2M}\frac{L}{hc}v(x)\psi_{b}(x)-\left% (-\frac{\hbar^{2}b_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)}{8L^{2}M}\right)% \frac{L}{hc}\psi_{b}(x)=0,\psi_{b}[0]=0,\psi_{b}(L)=e^{i\left(n+\frac{1}{2}% \right)}\right\},\psi_{b}(x),x\right]}$ $\left\{\left\{\psi_{b}(x)\to\frac{e^{i\left(n+\frac{1}{2}\right)}\text{% MathieuS}\left[b_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right),-2L^{2}R_{0},\frac{% ix}{2L}\right]}{\text{MathieuS}\left[b_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right% ),-2L^{2}R_{0},\frac{i}{2}\right]}\right\}\right\}$ The full solution will be $\psi$(x)=$\lambda_{a}\frac{e^{i\left(n+\frac{1}{2}\right)}\text{MathieuC}\left[a_{n+% \frac{1}{2}}\left(-2L^{2}R_{0}\right),-2L^{2}R_{0},\frac{ix}{2L}\right]}{\text% {MathieuC}\left[a_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right),-2L^{2}R_{0},\frac{% i}{2}\right]}+\lambda_{b}\frac{e^{i\left(n+\frac{1}{2}\right)}\text{MathieuS}% \left[b_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right),-2L^{2}R_{0},\frac{ix}{2L}% \right]}{\text{MathieuS}\left[b_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right),-2L^{% 2}R_{0},\frac{i}{2}\right]}$ The set of eigenvalues are 
$\text{En}=-\left(\lambda_{a}\frac{\hbar^{2}a_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)}{8L^{2}M}+\lambda_{b}\frac{\hbar^{2}b_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)}{8L^{2}M}\right)$

For a black hole the lambdas would have to be 1/2; this would make the state of the hole a state of maximum entropy, in accordance with established theory. Thus I can write down the relativized quantum state of a black hole in this framework as a function of x, which in this paper is $x=\gamma^{a}x_{a}$:

$\psi_{\text{BH}}(x)=\frac{1}{2}\frac{e^{i\left(n+\frac{1}{2}\right)}\text{MathieuC}\left[a_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right),-2L^{2}R_{0},\frac{ix}{2L}\right]}{\text{MathieuC}\left[a_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right),-2L^{2}R_{0},\frac{i}{2}\right]}+\frac{1}{2}\frac{e^{i\left(n+\frac{1}{2}\right)}\text{MathieuS}\left[b_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right),-2L^{2}R_{0},\frac{ix}{2L}\right]}{\text{MathieuS}\left[b_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right),-2L^{2}R_{0},\frac{i}{2}\right]}$

$\text{Enbh}=-\left(\frac{1}{2}\frac{\hbar^{2}a_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)}{8L^{2}M}+\frac{1}{2}\frac{\hbar^{2}b_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)}{8L^{2}M}\right)$

### Numerical Eigenvalues

Here I will compute some of the energy eigenvalues numerically for a one Planck mass black hole in the lowest energy state; first symbolically, then with realistic numbers for the physical constants. Let us consider the even values of n for $a_{n}$. To find these eigenvalues I need to input several constants from particle physics and parameters from cosmology. First I will enter the Planck mass:

$\boldsymbol{n=\{1,2,3,4,100,\infty\}}$

$\{1,2,3,4,100,\infty\}$

$\boldsymbol{M=n\,(1.2209\times 10^{19})\times 10^{9}}$

$\{1.2209\times 10^{28},2.4418\times 10^{28},3.6627\times 10^{28},4.8836\times 10^{28},1.2209\times 10^{30},\infty\}$

Then the reduced Planck constant.
In eV s:

$\boldsymbol{\hbar=6.58211928\times 10^{-16}}$

The exact speed of light will also be useful:

$\boldsymbol{c=299792458}$

The Cavendish constant with eV/$c^{2}$ units for mass is also required, so I need a conversion factor, which I obtained using WolframAlpha:

$\boldsymbol{\text{conv}=\text{kg}\,(5.60959\times 10^{35}\,\text{eV}\,c^{-2})^{-1}}$

$\boldsymbol{G=(6.67384\times 10^{-11}\,\text{m}^{3}\,\text{kg}^{-1}\,\text{s}^{-2})\times\text{conv}=1.18972\times 10^{-46}}$

The Schwarzschild radii of Planck scale black holes:

$\boldsymbol{L=2GM}$

Notice this is much more than the Planck length, though still very tiny: as small as 2.9 attometers. I take the ground state curvature eigenvalue to be identical to the cosmological constant, accurate to three significant figures: $R_{0}=\Lambda=2.26536\times 10^{-52}$ [3].

$\boldsymbol{\text{Enbh}=-\left(\frac{1}{2}\frac{\hbar^{2}a_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)}{8L^{2}M}+\frac{1}{2}\frac{\hbar^{2}b_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)}{8L^{2}M}\right)}$

As shown above, a black hole, and really any gravitationally bound system, is one where the higher the energy eigenvalue, the more difficult it becomes to escape. Indeed, for a gravitationally bound system a particle can get very close to freedom but can never really be free. Or can it?

## Hawking radiation in the framework of Relativization

In the framework of relativization, the form of the problem of Hawking radiation is that of barrier penetration and tunneling.
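Before treating the tunneling problem, the eigenvalue formula can be sanity checked in a few lines. Since the Mathieu parameter $q=-2L^{2}R_{0}$ is astronomically small, the characteristic values reduce to $a_{\nu}(q)\approx b_{\nu}(q)\approx\nu^{2}+O(q^{2})$, a standard small-$q$ property of the Mathieu functions. The following sketch is my addition, not the paper's exact Mathematica evaluation; it uses the constants entered above and treats M as a rest energy in eV so that the factor $\hbar c$ closes the units:

```python
# Order-of-magnitude check (my sketch): with a_nu(q) ~ nu^2 for the tiny q,
# the Enbh formula reduces to roughly -(hbar*c)^2 (n + 1/2)^2 / (8 L^2 Mc2).
hbar_c = 6.58211928e-16 * 299792458.0   # hbar*c in eV m
G = 1.18972e-46                          # the paper's converted Cavendish constant
Mc2 = 1.2209e28                          # one Planck mass as a rest energy, eV
R0 = 2.26536e-52                         # ground state curvature eigenvalue (= Lambda)

L = 2.0 * G * Mc2                        # the paper's L = 2GM, about 2.9 attometers
q = -2.0 * L**2 * R0                     # Mathieu parameter: utterly negligible
nu = 1 + 0.5                             # lowest state considered, n = 1
En = -(hbar_c**2) * nu**2 / (8.0 * L**2 * Mc2)   # eV, order of magnitude only
print(L, q, En)
```

The binding energies that come out are tiny fractions of an eV, and |q| is of order $10^{-87}$, which is why the small-$q$ limit is safe here.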
There are two ways to approach this. One is to compute the probability current density vectors using the inner product on a relativized Hilbert space (if this were M theory, this would be an inner product on the world sheet of the brane describing the black hole). Instead of doing that I will use the WKB approximation: a simple and straightforward calculation that one hopes Mathematica can easily automate. This transmission coefficient, times the value of the energy eigenvalue, divided by the area of the black hole and a unit of time, say the Planck time, will give the luminosity of Hawking radiation for a black hole. The luminosity of a black hole due to Hawking radiation in this model will be given exactly by

$L_{BH}=\frac{\left|\frac{\hbar^{2}\left(a_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)+b_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)\right)}{L^{4}Mt_{p}}\right|e^{-\frac{iLE\left(\frac{i}{2}|-\frac{16L^{2}R_{0}}{-8R_{0}L^{2}+a_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)+b_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)}\right)\sqrt{\frac{\hbar^{2}\left(4\left(1+e^{2}\right)L^{2}R_{0}-e\left(a_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)+b_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)\right)\right)}{L^{2}M}}}{2\hbar\sqrt{\frac{e\left(a_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)+b_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)\right)-4\left(1+e^{2}\right)L^{2}R_{0}}{a_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)+b_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)-8L^{2}R_{0}}}}}}{64\pi}$ (1)

Mathematica generates a conditional expression; let us consider each possible “OR” ($\lor$) condition in turn. The first is

$\Re\left(\frac{a_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)+b_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)}{L^{2}R_{0}}\right)<8$

This condition is telling us that, for this integral to exist and for tunneling from the center of the black hole to the surface to occur, this value has to be less than eight.
Since $R_{0}$ is tiny and L is going to be large, $a_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)+b_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)$ will be of order unity, so in many realistic situations this number will also be of order unity. The next condition is

$e\,\Re\left(\frac{a_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)+b_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)}{L^{2}R_{0}}\right)>4+4e^{2}$

which is telling us that Hawking radiation will be possible when e times the above number is greater than $4+4e^{2}$. The last condition is more mysterious:

$\frac{a_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)+b_{n+\frac{1}{2}}\left(-2L^{2}R_{0}\right)}{L^{2}R_{0}}\notin\mathbb{R}$

This condition is saying that when the quantity above is not a real number, tunneling is also possible from the center of the black hole to its surface. Since all the values input will be real, and the Mathieu characteristic functions have real output, this situation need not be considered physical.

The exponential term is a complicated oscillatory function of L, the Schwarzschild radius of the black hole. Hawking and Bekenstein’s theory would predict a perfectly smooth variation in the luminosity of the black hole with mass and Schwarzschild radius; this theory predicts that the black hole will radiate with a slightly and rapidly varying intensity as it loses mass. For a black hole which is in a stable or metastable state, the luminosity due to Hawking radiation simplifies to

$L_{BH}\approx\frac{1}{16}\left|\frac{\hbar^{2}\left(a_{n+\frac{1}{2}}\left(-8G^{2}M^{2}R_{0}\right)+b_{n+\frac{1}{2}}\left(-8G^{2}M^{2}R_{0}\right)\right)}{G^{4}M^{5}t_{p}}\right|$ (2)

## The Luminosity due to Hawking Radiation of a selection of black holes including Sagittarius A*

A formal equation is fine.
However, a numerical result which can be compared to observations is much better. One of the most observed black holes today is Sagittarius A*, the supermassive black hole at the center of the galaxy. I will also consider a black hole with a mass of one kilogram, and a black hole with the minimum mass it would take to create a stellar mass black hole. I will assume that each black hole will stay in its lowest energy eigenstate for whatever mass it happens to be, so for all these numerical calculations n=1. M in the below will be a list starting with one kilogram, then eight solar masses, then the mass of Sagittarius A* in solar masses but converted to kilograms. This will give us the final answers in easily measurable and relatable SI units. The luminosity of the Hawking radiation from a 1 kg, an 8 M${}_{\odot}$, and the black hole at the center of the galaxy will be: $L_{BH}\approx\frac{1}{16}\left|\frac{\hbar^{2}\left(a_{n+\frac{1}{2}}\left(-8G^{2}M^{2}R_{0}\right)+b_{n+\frac{1}{2}}\left(-8G^{2}M^{2}R_{0}\right)\right)}{G^{4}M^{5}t_{p}}\right|$ With Stefan's constant $\boldsymbol{\sigma=5.670\times 10^{-8}\,\mathrm{W\,m^{-2}\,K^{-4}}}$ and a standard formula, the temperatures are: $\boldsymbol{T_{\text{BH}}=\frac{\sqrt[4]{L_{\text{BH}}}}{2\sqrt[4]{\pi}\sqrt{G}\sqrt{M}\sqrt[4]{\sigma}}}$ For a hypothetical one kilogram black hole the temperature due to Hawking radiation would be 55 billion kelvin. For a stellar mass black hole the Hawking radiation would be $1.4\times 10^{-44}$ kelvin. For Sagittarius A* this corresponds to a temperature of $5.9\times 10^{-54}$ K. A run-of-the-mill stellar mass black hole and the supermassive Sagittarius A* are orders of magnitude colder than the cosmic microwave background. Right now there should be a net inflow of radiation into any astrophysical black hole. There should be no observable Hawking radiation from a stellar mass black hole. 
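The temperature formula above can be sketched numerically. The following Python snippet is an illustration added here, not part of the original paper: it evaluates the printed formula at face value, and the luminosity inputs are hypothetical placeholders rather than the paper's computed $L_{BH}$ values.

```python
import math

G = 6.674e-11     # gravitational constant, SI units
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def t_bh(luminosity, mass):
    """Black-hole temperature from the paper's formula, taken at face value:
    T = L^(1/4) / (2 * pi^(1/4) * sqrt(G) * sqrt(M) * sigma^(1/4))."""
    return luminosity ** 0.25 / (
        2 * math.pi ** 0.25 * math.sqrt(G) * math.sqrt(mass) * SIGMA ** 0.25
    )

# Hypothetical inputs, only to exercise the formula:
t1 = t_bh(1.0e-30, 1.0)  # 1 kg black hole, placeholder luminosity
t2 = t_bh(1.6e-29, 1.0)  # 16x the luminosity; T should double (L^(1/4) scaling)
```

The point being checked is only the $T\propto L_{BH}^{1/4}$ scaling: multiplying the luminosity by 16 doubles the temperature.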
The predictions of my model, derived by very different means, are very close to those of standard Hawking radiation. This model differs from the Hawking model in one other way; to see how, examine the plot (figure not reproduced here). This model predicts that a black hole with a mass of $1.2\times 10^{36}$ kg will be warmer than one of $1.0\times 10^{36}$ kg by about $0.3\times 10^{36}$ Kelvin. Put another way, a black hole 14 percent of the mass of Sagittarius A* will be warmer than one 12 percent of the mass of Sagittarius A*. This result corrects, extends and replaces that of section 4.2 of "The Fundamentals of Relativization". This model predicts that for black holes of mass less than 15 petagrams, Hawking's model predicts a lower temperature; for higher masses than that, this model predicts the lower temperature. This model predicts no observable Hawking radiation from any known black hole of stellar mass or higher. This model could be confirmed by precise enough observations of black holes of various masses, with a deduction for the radiation due to any influxes of matter. ## Discussion. This paper reveals one minor and two major results of this investigation, both theoretical and observational in nature. ### Exact Solution for the Relevant Schrödinger Equation. To correct an issue with a previous paper I needed to find an exact solution to the relevant Schrödinger equation. In the process I found the solutions, boundary conditions and eigenvectors in the position basis for the solution, which will work at any length scale. With these solutions I can write down, at least formally, eigenstates and eigenvectors for any gravitationally bound system at any length scale, and obtain finite results as long as the length scale is not infinite. When the length scale tends toward zero the eigenvalues and eigenvectors will tend towards equation 4. ### Correction to Section 4.2 of the Fundamentals of Relativization. 
In a previous paper I attempted to write down a formula for the Hawking radiation which would flow from a micro black hole given the framework of Relativization. The result was basically correct, but the estimate for the magnitude of the radiation was much too high, as it was at best an upper bound. With the exact solution to the relevant Schrödinger equation I have been able to calculate the correct formula for the intensity of Hawking radiation from such a black hole. ### An independent calculation of Hawking Radiation In the above I have calculated the Hawking radiation temperature of a black hole in my model and achieved a high level of agreement with Hawking's calculation. However, while my model agrees in terms of the shape of the curve, it differs enough that if we can ever observe black holes with high enough precision we may be able to distinguish between the two. The agreement of this model, derived from the fundamental principles of relativization, with the Hawking radiation results speaks to the consistency of this result with known, accepted theoretical astrophysics. ### My Level of Confidence in These Results. I am humble enough to know I could be wrong; I just can't find an obvious reason no matter how I try. So I submit this paper looking for a non-obvious reason. I reserve the right to ignore anonymous comments or those which do not relate to the contents of this paper. The previous papers have their own comment and review areas. Reviews are welcome to the extent that they are useful for finding flaws in the paper, i.e., if you find a non-refutable flaw in the math, physics, or logic. Bear in mind this math has been done with the computer algebra system Mathematica; it is very unlikely that a simple error in calculation has been made here. What could happen is an error in one of my basic assumptions or in my assignment of units. To check this I made sure to have Mathematica call on Wolfram Alpha to check the units. 
The units are correct in my calculations so far as I can tell. Bad units in an equation would be an easy and real sign of something wrong, yet easily corrected. Any criticism which seeks to simply rudely dismiss this paper must address the numerical calculations presented herein. Comments which are more appropriately related to the previous papers will be ignored unless posted with those papers. # References • K. Becker, M. Becker and J. H. Schwarz (2007). String Theory and M-Theory: A Modern Introduction. Cambridge University Press. Cited by: Introduction. • H. Farmer. Fundamentals of Relativization. The Winnower. Cited by: Introduction. ### Showing 1 Review • Hontas Farmer I don't think it proper to review one's own work, so I won't officially give this paper any stars; I think a mere comment is totally proper. First of all, in the early going figures take time to display on The Winnower; anyone who wishes to see the graphs should consult the tar-gzipped Mathematica CDF file, or the PDF compiled on my own. The figures will appear fully in due time. ### My Level of Confidence in These Results, Nonsense wouldn't model nature. Any criticism which seeks to simply rudely dismiss this paper must address the numerical calculations presented herein. Show precisely where the math is wrong. 
Don't just state that it's "nonsense". Nonsense wouldn't model nature in precisely the same way as a widely accepted theory, black hole thermodynamics, with only minor deviations. Please make reviews professional and substantive. If it were proper I would give this 4 stars for confidence; the computer is unlikely to have made an error, and the results make physical sense. 4 stars for figures, 2 for originality, and, though I try, maybe 2-3 for writing quality. I would never give my own work 5 stars, even unofficially.
http://mathhelpforum.com/geometry/68060-angles-triangle.html
1. ## Angles of triangle

in attachments

Attached Files

2. You should be very familiar with the first equation ...

$m\angle{1} + m\angle{2} + m\angle{3} = 180^{\circ}$

Since $m\angle{3} + m\angle{4} = 180^{\circ}$ (a linear pair),

$m\angle{1} + m\angle{2} + m\angle{3} = m\angle{3} + m\angle{4}$

so ...

$m\angle{1} + m\angle{2} = m\angle{4}$
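A quick numeric check of the exterior-angle result above, with example angles chosen here (they are not from the thread):

```python
# Pick any two remote interior angles; the third follows from the 180-degree sum.
m1, m2 = 50.0, 60.0
m3 = 180.0 - m1 - m2   # third interior angle = 70
m4 = 180.0 - m3        # angle 4 forms a linear pair with angle 3

# The exterior angle equals the sum of the two remote interior angles:
print(m4)  # 110.0, which is m1 + m2
```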
https://www.socratease.in/content/18/structure-of-atom-2/13/stepping-up
That was simple, wasn't it! As you start filling an atom with electrons, you always start with the orbit closest to the nucleus. So, the answer is $$n=1$$. If you add 1 more electron to Hydrogen (and 1 proton and 2 neutrons to the nucleus, not shown in the animation), we get Helium. The added electron also goes to the $$n=1$$ orbit. If you now add 1 more electron to Helium (and 1 proton and 2 neutrons), you get Lithium. Remember that $$n=1$$ can have a maximum of $$2n^2 = 2 \times 1^2 = 2$$ electrons (that's why the green tick in the table). So, the 3rd electron has to go into the $$n=2$$ orbit.
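The $2n^2$ filling rule described above is easy to check in code. This small Python sketch is an illustration added here, not part of the lesson; it assigns electrons to the lowest available orbit:

```python
def shell_capacity(n):
    """Maximum number of electrons the n-th orbit can hold: 2 * n^2."""
    return 2 * n * n

def fill_shells(num_electrons):
    """Fill orbits from n=1 upward, returning the electron count per orbit."""
    shells = []
    n = 1
    while num_electrons > 0:
        take = min(num_electrons, shell_capacity(n))
        shells.append(take)
        num_electrons -= take
        n += 1
    return shells

# Hydrogen (1 electron), Helium (2), Lithium (3):
print(fill_shells(1))  # [1]
print(fill_shells(2))  # [2]
print(fill_shells(3))  # [2, 1]  <- the 3rd electron goes to n=2
```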
https://forum.allaboutcircuits.com/threads/digitally-controlled-voltage-with-feedback.121101/
# Digitally controlled voltage with feedback.

#### coinmaster
Joined Dec 24, 2015
502
I'm trying to create a digitally controlled voltage which I will end up using for a software-controlled voltage divider for a high voltage control circuit, around 600 V. For now I am trying to get it to work on 5 V. This is the general idea. I want precise output voltages, so I want to negate any non-linearities in the transistor with digital feedback from a voltage sense IC or something. So basically I'll tell the controller that I want x voltage on the output, and if the voltage on the output reads more or less than that voltage then it will self-correct. This is the code I'm using right now:
C:
// the setup routine runs once when you press reset:
void setup() {
  // initialize serial communication at 9600 bits per second:
  Serial.begin(9600);
  pinMode(13, OUTPUT);
}

// the loop routine runs over and over again forever:
void loop() {
  digitalWrite(13, HIGH);

  // Reads potentiometer voltage to get a reference number used for intended output voltage
  int reference = analogRead(A0);  // (pin assumed; declaration missing in the original post)

  // Takes analogRead value and turns it into the intended voltage reference number
  float ReferenceNumber = reference * (5.0 / 1023.0);

  // Translation from intended voltage number to analogWrite voltage
  float output = ReferenceNumber * (225 / 5.0);

  // output to transistor base
  analogWrite(9, output);

  // reads the output voltage for feedback control
  int feedback = analogRead(A1);  // (pin assumed; declaration missing in the original post)

  // If the output voltage is greater or less than the intended voltage then the opto-isolator bias will be adjusted.
  if (feedback * (255 / 1023.0) > output) { output = -1; }
  if (feedback * (255 / 1023.0) < output) { output = +1; }

  // Prints the output voltage
  Serial.println(feedback * (5.0 / 1023.0));
}
I'm using a potentiometer to adjust the reference voltage, and then I'm converting the reference voltage number into an analogWrite voltage, which is then read by analogRead at the output; the output is adjusted based on whether it is lower or higher than the reference number. 
It works, but it is not stable; it fluctuates all over, within a volt or more. Is there some problem in the concept of my code or something? I only started learning digital electronics and coding a couple days ago.

Moderators note: changed code tags to C
Last edited by a moderator:

#### Alec_t
Joined Sep 17, 2013
11,504
R9/C1 introduce a considerable delay between the commanded voltage output and the analogRead value that you want. I don't see any delay routines to allow for that?

#### coinmaster
Joined Dec 24, 2015
502
What do you mean? How is my RC filter delaying the output voltage? Also, I changed the code and switched the supply pins on the Arduino, and this is the result: http://imgur.com/RSRyUwz The stability is much better but the accuracy is off. Perhaps the ADC does not have a high enough resolution? This is the code I am now using:
C:
int PWMout = 9;   // Output PWM pin
int output = 255; // Start PWM at 255 as this is zero voltage output, because the circuit is a shunt regulator.

// the setup routine runs once when you press reset:
void setup() {
  // initialize serial communication at 9600 bits per second:
  Serial.begin(9600);
  pinMode(13, OUTPUT);
  pinMode(A0, INPUT);
  pinMode(A1, INPUT);
  pinMode(PWMout, OUTPUT);
}

// the loop routine runs over and over again forever:
void loop() {
  digitalWrite(13, HIGH);

  // Reads potentiometer voltage to get a reference number used for intended output voltage
  int reference = analogRead(A0);  // (declaration missing in the original post)

  // Takes analogRead value and turns it into the intended voltage reference
  float ReferenceVoltage = reference * (5.0 / 1023.0);

  // reads the feedback output voltage for feedback control
  int feedback = analogRead(A1);   // (declaration missing in the original post)
  float FeedbackVoltage = feedback * (5.0 / 1023.0);

  // If the feedback voltage is greater than the reference then the PWM needs to be increased to increase shunt current.
  if (FeedbackVoltage > ReferenceVoltage) {
    output = +1;
  }

  // If the feedback voltage is less than the reference then the PWM needs to be decreased to decrease shunt current.
  if (FeedbackVoltage < ReferenceVoltage) {
    output = -1;
  }

  // Check output value and limit maximum and minimum
  if (output >= 255) { output = 255; }
  if (output <= 0) { output = 0; }

  // Write the PWM output
  analogWrite(PWMout, output);

  // Prints the voltages
  Serial.print("Reference ");
  Serial.print(ReferenceVoltage);
  Serial.print("\tFeedbackVoltage ");
  Serial.print(FeedbackVoltage);
  Serial.print("\tPWMout ");
  Serial.println(output);
}
Last edited:

#### dannyf
Joined Sep 13, 2015
2,197
it fluctuates all over within a volt or more.
It is very hard to stabilize a mixed-signal control loop. The universal trick is to lower the gain, but it comes with its bag of negativity as well.

#### dannyf
Joined Sep 13, 2015
2,197
You are doing "on/off" control; that may not be the best approach. I would try this:
1) measure the analog output (middle of R5/R7);
2) if it is greater than your desired output, increment output; else decrement output; update the PWM duty cycle;
3) go back to 1).
You can adjust the stepping of the increments / decrements.

#### coinmaster
Joined Dec 24, 2015
502
You are doing "on/off" control; that may not be the best approach. I would try this: 1) measure the analog output (middle of R5/R7); 2) if it is greater than your desired output, increment output; else decrement output; update the PWM duty cycle; 3) go back to 1). You can adjust the stepping of the increments / decrements.
Um, is that not what I am doing now? Check out my last post. Also I changed it to
C:
if (FeedbackVoltage > ReferenceVoltage) {
  output++;
}

// If the feedback voltage is less than the reference then the PWM needs to be decreased to decrease shunt current.
if (FeedbackVoltage < ReferenceVoltage) {
  output--;
}
and the result is the same whether I use +1 or ++. The error margin is pretty much eliminated after changing the supply pin and refining the code. The main issue is accuracy: http://imgur.com/L3D2Ygz I'm assuming this is a resolution issue? The Arduino only has 8 bits ADC. 
Last edited:

#### Alec_t
Joined Sep 17, 2013
11,504
How is my RC filter delaying the output voltage?
The R9/C1 time constant is 100 ms, so a voltage change at the Arduino output will not appear across C1, and hence at the 'Voltmeter' node, immediately.

#### dannyf
Joined Sep 13, 2015
2,197
Code:
// If the feedback voltage is greater than the reference then the PWM needs to be increased to increase shunt current.
if (FeedbackVoltage > ReferenceVoltage )
{
  output = +1 ;
}

// If the feedback voltage is less than the reference then the PWM needs to be decreased to decrease shunt current.
if (FeedbackVoltage < ReferenceVoltage )
{
  output = -1 ;
}
That's what you wrote.
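The step-by-one feedback approach discussed in this thread can be checked off-hardware. The following Python simulation is purely illustrative: the linear plant model, target voltage, and step count are invented here, not taken from the thread.

```python
def simulate(target, steps=2000):
    """Simulate an 8-bit PWM shunt regulator with incremental feedback.

    The plant model is a made-up linear one: the output voltage falls as
    PWM duty rises (shunt behaviour), v = 5.0 * (255 - pwm) / 255.
    """
    pwm = 255  # start at full shunt current -> zero output, as in the thread
    v = 0.0
    for _ in range(steps):
        v = 5.0 * (255 - pwm) / 255.0   # "measure" the output
        if v > target:
            pwm += 1                    # more shunt current pulls the output down
        elif v < target:
            pwm -= 1
        pwm = max(0, min(255, pwm))     # clamp to the 8-bit PWM range
    return v

v = simulate(3.3)
```

The simulated output settles into a one-step band around the target, which is the expected steady-state ripple of a step-by-one controller: about 5 V / 255, roughly 20 mV.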
https://www.ohsrep.org.au/nanotechnology_-_proposed_legislation
# Nanotechnology - proposed legislation At the end of 2009, the National Industrial Chemicals Notification and Assessment Scheme (NICNAS) initiated a stakeholder consultation process and issued a discussion paper which proposed a regulatory package for industrial nanomaterials, both for new chemicals and for nano forms of 'existing' chemicals. (The VTHC submission to the discussion paper can be downloaded on the right hand side of this page.) Changes were introduced in terms of requirements for nano forms of 'new' chemicals (that is, chemicals not already listed on the AICS), with introducers of these chemicals, irrespective of quantities being introduced, required to notify NICNAS (see this page of the NICNAS website for information on the changes). However, how nano forms of existing chemicals are to be regulated is still under discussion, having been put on hold until such time as a review of NICNAS has been completed. NICNAS has a page on nanotechnology, with background documents and more. Last amended March 2015
http://meetings.aps.org/Meeting/DFD05/Event/37903
### Session NK: Chaos 11:01 AM–1:37 PM, Tuesday, November 22, 2005 Hilton Chicago Room: Joliet Chair: Thomas Solomon, Bucknell University Abstract ID: BAPS.2005.DFD.NK.8 ### Abstract: NK.00008 : Synchronization via superdiffusive mixing in an extended, advection-reaction-diffusion system 12:32 PM–12:45 PM #### Authors: Matt Paoletti, Carolyn Nugent, Tom Solomon (Bucknell University) We study synchronization of the Belousov-Zhabotinsky (BZ) chemical reaction in an annular chain of alternating vortices. The vortex chain can (a) oscillate, in which case chaotic advection enhances mixing between adjacent vortices, and/or (b) drift, in which case a jet region forms, allowing tracers to travel rapidly around the annulus. If the chain both oscillates and drifts, the long-range transport is diffusive for drift velocity $v_d <$ oscillation velocity $v_o$ and superdiffusive for $v_d > v_o$. We map out the regimes in parameter space ($v_o$ versus $v_d$) where the BZ reaction synchronizes. We find that synchronization is much more prevalent for the regimes in which transport is superdiffusive. The results are interpreted by considering Lévy flights (tracer trajectories characterized by long jumps) associated with superdiffusive transport as "short-cuts" connecting distant parts of the system, similar to those proposed for discrete "small world" networks. To cite this abstract, use the following reference: http://meetings.aps.org/link/BAPS.2005.DFD.NK.8
https://www.math.rutgers.edu/news-events/seminars-colloquia-calendar/icalrepeat.detail/2017/11/30/9419/-/on-the-emergence-of-dissipation-in-the-unitary-dynamics-of-a-closed-many-body-quantum
# Seminars & Colloquia Calendar Mathematical Physics Seminar ## On the emergence of dissipation in the unitary dynamics of a closed many-body quantum system #### David Huse - Princeton University Location: HILL 705 Date & time: Thursday, 30 November 2017 at 12:00 PM - 1:00 PM Abstract: We study the unitary dynamics of operators in a chaotic many-body system with a locally conserved quantity like charge or energy that moves diffusively. The results shed some light on the mechanism by which unitary quantum dynamics, which is reversible, gives rise to diffusive transport, which is a dissipative process. We obtain our results in a random quantum circuit model that has a conservation law. We find that a generic local operator consists of two parts: (i) a conserved part which comprises the weight of the operator on the local conserved densities, whose dynamics is described by diffusion. This conserved part also acts as a source that steadily emits a flux of (ii) non-conserved operators that then rapidly spread and become highly nonlocal and thus effectively not "observable". This emission is at a rate set by the local diffusion current. We can also follow the unitary dynamics of the non-conserved component of the operator and see it spreading at a Lieb-Robinson speed, but with diffusive corrections due to the conserved part of the operator that is "left behind" and whose weight decays as a power of time. Ref.: Khemani, Vishwanath and Huse, arXiv:1710.09835.
https://physicshelpforum.com/threads/vector-based-equations.14402/
# Vector-based equations #### Phantoful Feb 2018 2 0 I have an annoying physics question that I have been trying to work at for about 3 hours now. The class I'm taking is calculus-based, but I'm not even sure if this question is necessarily calculus-related; I'm not sure how I should be approaching it. Here is a link, because the subscripts and notation are not transferable... I've tried graphing these functions, but I'm pretty sure I'm not doing it correctly, or I don't know how to analyze it, especially with an extra 't' factor. I also tried to relate all of it in terms of the Earth, and then in terms of Mars, but nothing seems to be resulting in progress. I also tried subtracting the x's and y's to get a resultant, doing a dot product by multiplying the x's and y's, and both times I just ended up at a halt.
https://dsp.stackexchange.com/questions/462/why-did-my-sine-wave-turn-into-a-square-wave-when-written-to-a-wav-file-in-octav
# Why did my sine wave turn into a square wave when written to a WAV file in Octave? I am trying to use Octave to generate a pure sine wave. The code for the same is as follows: x = 10.*sin(2*pi*(300/16000)*(0:1:400)); The sampling rate is 16000 Hz; the sine wave is at 300 Hz. I write the above wave to a file using wavwrite like so: wavwrite(x, 16000, 16, "temp.wav") When I try to read it back into a variable, like so: y = wavread('temp.wav');, I get square waves upon plotting y. I have checked the sine wave, and the period indicates a frequency of 300 Hz. How can a pure sine wave become a square wave on simply writing and reading? Or am I going wrong somewhere? x is being clipped, and that is why y looks like a square wave. When writing x to disk using wavwrite, the samples of x are stored in 16-bit Q15 fixed-point format. That means your data must be in the range -1 to +1 (in principle, +1 minus one LSB). Therefore, x must be normalized to be in this range before calling wavwrite in order to avoid clipping.
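The clipping can be reproduced without writing a file at all. This Python sketch (a stand-in for the Octave code, not from the original answer) applies the same saturation to [-1, +1] that the 16-bit conversion effectively performs, and shows that normalizing first preserves the sine shape:

```python
import math

# Same signal as in the question: amplitude 10, 300 Hz at 16 kHz, 401 samples.
fs, f, amp = 16000, 300, 10.0
x = [amp * math.sin(2 * math.pi * f * n / fs) for n in range(401)]

# What the 16-bit write effectively does to out-of-range samples: saturate.
clipped = [max(-1.0, min(1.0, s)) for s in x]

# Normalizing first keeps the waveform a sine instead of a near-square wave.
peak = max(abs(s) for s in x)
normalized = [s / peak for s in x]

print(max(clipped), min(clipped))        # 1.0 -1.0  (tops and bottoms flattened)
print(max(abs(s) for s in normalized))   # 1.0       (full scale, no clipping)
```

Nearly all samples of the amplitude-10 sine exceed the [-1, +1] range, so after saturation the waveform spends most of each cycle pinned at the rails, which is why the read-back signal looks square.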
http://www.ck12.org/trigonometry/Identifying-Sets-of-Pythagorean-Triples/lecture/Pythagorean-Triples/r1/
<meta http-equiv="refresh" content="1; url=/nojavascript/"> Identifying Sets of Pythagorean Triples ( Video ) | Trigonometry | CK-12 Foundation # Identifying Sets of Pythagorean Triples % Progress Practice Identifying Sets of Pythagorean Triples Progress % Pythagorean Triples Learn what a Pythagorean Triple is with examples.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8132954835891724, "perplexity": 20356.97318267313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931010166.36/warc/CC-MAIN-20141125155650-00056-ip-10-235-23-156.ec2.internal.warc.gz"}
#### Sample records for mupirocin strengths weaknesses 1. Mupirocin MedlinePlus Mupirocin, an antibiotic, is used to treat impetigo as well as other skin infections caused by bacteria. It is not effective against fungal or viral infections.This medication is sometimes prescribed ... 2. The Hidden Strengths of Weak Theories PubMed Central Keil, Frank 2012-01-01 There has been a strong tradition of assuming that concepts, and their patterns of formation might be best understood in terms of how they are embedded in theory-like sets of beliefs. Although such views of concepts as embedded in theories have been criticized on five distinct grounds, there are reasonable responses to each of these usual objections. There is, however, a newly emerging concern that is much more challenging to address – people’s intuitive theories seem to be remarkably impoverished. In fact, they are so impoverished it is difficult to see how they could provide the necessary structure to explain differences between concepts and how they might form in development. One response to this recent challenge is to abandon all views of concept structure as being related to people’s intuitive theories and see concepts as essentially structure-free atoms. The alternative proposed here argues that our very weak theories might in fact do a great deal of work in explaining how we form concepts and are able to use them to successfully refer. PMID:25309684 3. Grip Strength Cutpoints for the Identification of Clinically Relevant Weakness PubMed Central Shardell, Michelle D.; Peters, Katherine W.; McLean, Robert R.; Dam, Thuy-Tien L.; Kenny, Anne M.; Fragala, Maren S.; Harris, Tamara B.; Kiel, Douglas P.; Guralnik, Jack M.; Ferrucci, Luigi; Kritchevsky, Stephen B.; Studenski, Stephanie A.; Vassileva, Maria T.; Cawthon, Peggy M. 2014-01-01 Background. Weakness is common and contributes to disability, but no consensus exists regarding a strength cutpoint to identify persons at high risk. 
This analysis, conducted as part of the Foundation for the National Institutes of Health Sarcopenia Project, sought to identify cutpoints that distinguish weakness associated with mobility impairment, defined as gait speed less than 0.8 m/s. Methods. In pooled cross-sectional data (9,897 men and 10,950 women), Classification and Regression Tree analysis was used to derive cutpoints for grip strength associated with mobility impairment. Results. In men, a grip strength of 26–32 kg was classified as “intermediate” and less than 26 kg as “weak”; 11% of men were intermediate and 5% were weak. Compared with men with normal strength, odds ratios for mobility impairment were 3.63 (95% CI: 3.01–4.38) and 7.62 (95% CI 6.13–9.49), respectively. In women, a grip strength of 16–20 kg was classified as “intermediate” and less than 16 kg as “weak”; 25% of women were intermediate and 18% were weak. Compared with women with normal strength, odds ratios for mobility impairment were 2.44 (95% CI 2.20–2.71) and 4.42 (95% CI 3.94–4.97), respectively. Weakness based on these cutpoints was associated with mobility impairment across subgroups based on age, body mass index, height, and disease status. Notably, in women, grip strength divided by body mass index provided better fit relative to grip strength alone, but fit was not sufficiently improved to merit different measures by gender and use of a more complex measure. Conclusions. Cutpoints for weakness derived from this large, diverse sample of older adults may be useful to identify populations who may benefit from interventions to improve muscle strength and function. PMID:24737558 4. School-Based Sexuality Education in Portugal: Strengths and Weaknesses ERIC Educational Resources Information Center Rocha, Ana Cristina; Leal, Cláudia; Duarte, Cidália 2016-01-01 Portugal, like many other countries, faces obstacles regarding school-based sexuality education. 
This paper explores Portuguese schools' approaches to implementing sexuality education at a local level, and provides a critical analysis of potential strengths and weaknesses. Documents related to sexuality education in a convenience sample of 89… 5. Patterns of Strengths and Weaknesses in Children's Knowledge about Fractions ERIC Educational Resources Information Center Hecht, Steven A.; Vagi, Kevin J. 2012-01-01 The purpose of this study was to explore individual patterns of strengths and weaknesses in children's mathematical knowledge about common fractions. Tasks that primarily measure either conceptual or procedural aspects of mathematical knowledge were assessed with the same children in their fourth- and fifth-grade years (N = 181, 56% female and 44%… 6. [Management models in clinical nutrition: weaknesses and strengths]. PubMed García de Lorenzo, A; Alvarez, J; Burgos, R; Cabrerizo, L; Farrer, K; García Almeida, J M; García Luna, P P; García Peris, P; Llano, J Del; Planas, M; Piñeiro, G 2009-01-01 At the 6th Abbott-SENPE Debate Forum a multidisciplinary and multiprofessional discussion was established in order to seek for the model or the models of clinical management most appropriate for Clinical Nutrition and Dietetics Units (CNAD) in Spain. The weaknesses and strengths as well as opportunities for the current systems were assessed concluding that a certain degree of disparity was observed not only due to regional differences but also to different hospital types. It was proposed, from SENPE, the creation of a working group helping to standardize the models and promote the culture of Integral Control and Change Management. 7. Screening for Mupirocin Resistance in Staphylococcus PubMed Central Sanju, Avr Jeya; Kopula, Sridharan Sathyamoorthy 2015-01-01 Introduction Mupirocin is widely used topical antibiotic for the treatment of skin and soft tissue infections caused by Staphylococcus and Streptococcus. 
In addition nasal formulations are approved for the use in nasal eradication of methicillin-resistant Staphylococcus aureus in patients and health care workers. Wide usage of mupirocin has resulted in resistance leading to treatment failure. Aim To screen for the mupirocin resistance among the Staphylococcus isolates using disc diffusion and minimum inhibitory concentration method. Materials and Methods A cross-sectional study was done at Microbiology Department of Sri Ramachandra University with 100 strains of Staphylococcus spp isolated from skin and soft tissue infections. Methicillin susceptibility was done by disc diffusion method using oxacillin (1 μgm) and cefoxitin (30 μgm) discs. Isolates were screened for mupirocin resistance by disc diffusion method using 5 μgm discs. High level and low level resistance determined by MIC using agar dilution method. Results In 100 Staphylococcus spp 56 were Staphylococcus aureus and 44 were CoNS. Among the 56 Staphylococcus aureus 49 (87.5%) were mupirocin susceptible and 7 (12.5%) resistant by 5μg disc diffusion method. However by MIC method 11 (19.6%) were high and low level mupirocin resistant. Out of 44 CoNS 22 (50%) and 18 (41%) were susceptible by disc diffusion and MIC method respectively. Of the 26 resistant CoNS low level and high level mupirocin resistant was observed in 7 (15.9%) and 19 (43.1%) respectively. Conclusion Screening for mupirocin resistance by disc diffusion method is important before attempting decolonisation. Mupirocin resistance is more with CoNS. Disc diffusion method may miss low level Mupirocin resistance. PMID:26557517 8. Big Data and Health Economics: Strengths, Weaknesses, Opportunities and Threats. PubMed Collins, Brendan 2016-02-01 'Big data' is the collective name for the increasing capacity of information systems to collect and store large volumes of data, which are often unstructured and time stamped, and to analyse these data by using regression and other statistical techniques. 
This is a review of the potential applications of big data and health economics, using a SWOT (strengths, weaknesses, opportunities, threats) approach. In health economics, large pseudonymized databases, such as the planned care.data programme in the UK, have the potential to increase understanding of how drugs work in the real world, taking into account adherence, co-morbidities, interactions and side effects. This 'real-world evidence' has applications in individualized medicine. More routine and larger-scale cost and outcomes data collection will make health economic analyses more disease specific and population specific but may require new skill sets. There is potential for biomonitoring and lifestyle data to inform health economic analyses and public health policy. 9. Antimicrobial potency of single and combined mupirocin and monoterpenes, thymol, menthol and 1,8-cineole against Staphylococcus aureus planktonic and biofilm growth. PubMed Kifer, Domagoj; Mužinić, Vedran; Klarić, Maja Šegvić 2016-09-01 Staphylococcus aureus is one of the most commonly isolated microbes in chronic rhinosinusitis (CRS) that can be complicated due to the formation of a staphylococcal biofilm. In this study, we investigated antimicrobial efficacy of single mupirocin and three types of monoterpenes (thymol, menthol and 1,8-cineole) as well as mupirocin-monoterpene combinations against S. aureus ATCC 29213 and 5 methicilin-resistant S. aureus strains (MRSA) grown in planktonic and biofilm form. MIC against planktonic bacteria as well as minimum biofilm-eliminating concentrations (MBECs) and minimum biofilm inhibitory concentrations (MBICs) were determined by TTC and MTT reduction assay, respectively. The MICs of mupirocin (0.125-0.156 μg ml(-1)) were three orders of magnitude lower than the MICs of monoterpenes, which were as follows: thymol (0.250-0.375 mg ml(-1)) > menthol (1 mg ml(-1)) > 1,8-cineole (4-8 mg ml(-1)). 
Mupirocin-monoterpene combinations showed indifferent effect as compared with MICs of single substances. Mupirocin (0.016-2 mg ml(-1)) failed to destroy the biofilm. The MBECs of thymol and menthol were two- to sixfold higher than their MICs, while 1,8-cineole exerted a weak antibiofilm effect with MBECs 16- to 64-fold higher than MICs. Mixture of mupirocin and 1,8 cineole exerted a potentiated biofilm-eliminating effect, mupirocin-menthol showed antagonism, while effect of thymol-mupirocin mixture was inconclusive. MBICs of antimicrobials were close to their MICs, except 1,8-cineole, MBIC was about three- to fivefold higher. Dominant synergy was observed for mixtures of mupirocin and menthol or thymol, whereas mupirocin-1,8-cineol exerted an indifferent or additive biofilm inhibitory effect. Particular combinations of mupirocin and the monoterpenes could be applied in CRS therapy in order to eliminate or prevent bacterial biofilm growth. PMID:26883392 10. Environmental metabolomics: a SWOT analysis (strengths, weaknesses, opportunities, and threats). PubMed Miller, Marion G 2007-02-01 Metabolomic approaches have the potential to make an exceptional contribution to understanding how chemicals and other environmental stressors can affect both human and environmental health. However, the application of metabolomics to environmental exposures, although getting underway, has not yet been extensively explored. This review will use a SWOT analysis model to discuss some of the strengths, weaknesses, opportunities, and threats that are apparent to an investigator venturing into this relatively new field. SWOT has been used extensively in business settings to uncover new outlooks and identify problems that would impede progress. The field of environmental metabolomics provides great opportunities for discovery, and this is recognized by a high level of interest in potential applications. 
However, understanding the biological consequence of environmental exposures can be confounded by inter- and intra-individual differences. Metabolomic profiles can yield a plethora of data, the interpretation of which is complex and still being evaluated and researched. The development of the field will depend on the availability of technologies for data handling and that permit ready access metabolomic databases. Understanding the relevance of metabolomic endpoints to organism health vs adaptation vs variation is an important step in understanding what constitutes a substantive environmental threat. Metabolomic applications in reproductive research are discussed. Overall, the development of a comprehensive mechanistic-based interpretation of metabolomic changes offers the possibility of providing information that will significantly contribute to the protection of human health and the environment. PMID:17269710 12. Committee Effectiveness in Higher Education: The Strengths and Weaknesses of Group Decision Making ERIC Educational Resources Information Center Bates, Stephen B. 2014-01-01 Focusing on five models of committee effectiveness for purposes of this assessment will lend insight into the strengths and weaknesses of utilizing a structured action plan as a guide to achieving and maintaining optimum committee effectiveness in higher education. In the compilation of the strengths and weaknesses of committee decision making,… 13. Strengths and Weaknesses of NESTs and NNESTs: Perceptions of NNESTs in Hong Kong ERIC Educational Resources Information Center Ma, Lai Ping Florence 2012-01-01 Since non-native English speaking teachers (NNESTs) are always compared with native English speaking teachers (NESTs) on linguistic grounds, their strengths and weaknesses as English teachers are worthy of investigation. This paper reports on a mixed methods study which examines the strengths and weaknesses of NNESTs and NESTs through the… 14.
75 FR 79295 - New Animal Drugs; Mupirocin Federal Register 2010, 2011, 2012, 2013, 2014 2010-12-20 ... HUMAN SERVICES Food and Drug Administration 21 CFR Parts 510 and 524 New Animal Drugs; Mupirocin AGENCY...) is amending the animal drug regulations to reflect approval of an abbreviated new animal drug...., has not been previously listed in the animal drug regulations as a sponsor of an approved... 15. 21 CFR 524.1465 - Mupirocin. Code of Federal Regulations, 2011 CFR 2011-04-01 ... Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) ANIMAL DRUGS, FEEDS, AND RELATED PRODUCTS OPHTHALMIC AND TOPICAL DOSAGE FORM NEW ANIMAL DRUGS § 524.1465 Mupirocin. (a... Staphylococcus aureus and S. intermedius. (3) Limitations. Federal law restricts this drug to use by or on... 16. Judges Cite Current Trends, Comment on Strengths and Weaknesses. ERIC Educational Resources Information Center Smith, Helen F. 1995-01-01 Features opinions of newspaper and yearbook judges as to the state of student publications. Cites as strengths: graphic devices, more pull-out quotes, more double-page spreads with dominant elements, and overall improvement in the quality of magazines. Cites as trends: overdoing opinion at the expense of research, and too many stories that are… 17. [Archaeology and criminology--Strengths and weaknesses of interdisciplinary cooperation]. PubMed Bachhiesl, Christian 2015-01-01 Interdisciplinary cooperation of archaeology and criminology is often focussed on the scientific methods applied in both fields of knowledge. In combination with the humanistic methods traditionally used in archaeology, the finding of facts can be enormously increased and the subsequent hermeneutic deduction of human behaviour in the past can take place on a more solid basis. Thus, interdisciplinary cooperation offers direct and indirect advantages. 
But it can also cause epistemological problems, if the weaknesses and limits of one method are to be corrected by applying methods used in other disciplines. This may result in the application of methods unsuitable for the problem to be investigated so that, in a way, the methodological and epistemological weaknesses of two disciplines potentiate each other. An example of this effect is the quantification of qualia. These epistemological reflections are compared with the interdisciplinary approach using the concrete case of the "Eulau Crime Scene". 18. Antimicrobial properties of Pseudomonas strains producing the antibiotic mupirocin. PubMed Matthijs, Sandra; Vander Wauven, Corinne; Cornu, Bertrand; Ye, Lumeng; Cornelis, Pierre; Thomas, Christopher M; Ongena, Marc 2014-10-01 Mupirocin is a polyketide antibiotic with broad antibacterial activity. It was isolated and characterized about 40 years ago from Pseudomonas fluorescens NCIMB 10586. To study the phylogenetic distribution of mupirocin producing strains in the genus Pseudomonas a large collection of Pseudomonas strains of worldwide origin, consisting of 117 Pseudomonas type strains and 461 strains isolated from different biological origins, was screened by PCR for the mmpD gene of the mupirocin gene cluster. Five mmpD(+) strains from different geographic and biological origin were identified. They all produced mupirocin and were strongly antagonistic against Staphylococcus aureus. Phylogenetic analysis showed that mupirocin production is limited to a single species. Inactivation of mupirocin production leads to complete loss of in vitro antagonism against S. aureus, except on certain iron-reduced media where the siderophore pyoverdine is responsible for the in vitro antagonism of a mupirocin-negative mutant. In addition to mupirocin some of the strains produced lipopeptides of the massetolide group. 
These lipopeptides do not play a role in the observed in vitro antagonism of the mupirocin producing strains against S. aureus. PMID:25303834 19. Objective Evaluation of Muscle Strength in Infants with Hypotonia and Muscle Weakness ERIC Educational Resources Information Center Reus, Linda; van Vlimmeren, Leo A.; Staal, J. Bart; Janssen, Anjo J. W. M.; Otten, Barto J.; Pelzer, Ben J.; Nijhuis-van der Sanden, Maria W. G. 2013-01-01 The clinical evaluation of an infant with motor delay, muscle weakness, and/or hypotonia would improve considerably if muscle strength could be measured objectively and normal reference values were available. The authors developed a method to measure muscle strength in infants and tested 81 typically developing infants, 6-36 months of age, and 17… 20. Health Education in India: A Strengths, Weaknesses, Opportunities, and Threats (SWOT) Analysis ERIC Educational Resources Information Center Sharma, Manoj 2005-01-01 The purpose of this study was to conduct a strengths, weaknesses, opportunities, and threats (SWOT) analysis of the health education profession and discipline in India. Materials from CINAHL, ERIC, MEDLINE, and Internet were collected to conduct the open coding of the SWOT analysis. Strengths of health education in India include an elaborate… 1. The strength-of-weak-ties perspective on creativity: a comprehensive examination and extension. PubMed Baer, Markus 2010-05-01 Disentangling the effects of weak ties on creativity, the present study separated, both theoretically and empirically, the effects of the size and strength of actors' idea networks and examined their joint impact while simultaneously considering the separate, moderating role of network diversity. I hypothesized that idea networks of optimal size and weak strength were more likely to boost creativity when they afforded actors access to a wide range of different social circles. 
In addition, I examined whether the joint effects of network size, strength, and diversity on creativity were further qualified by the openness to experience personality dimension. As expected, results indicated that actors were most creative when they maintained idea networks of optimal size, weak strength, and high diversity and when they scored high on the openness dimension. The implications of these results are discussed. 2. Internationally Adopted Children in the Early School Years: Relative Strengths and Weaknesses in Language Abilities ERIC Educational Resources Information Center Glennen, Sharon 2015-01-01 Purpose: This study aimed to determine the relative strengths and weaknesses in language and verbal short-term memory abilities of school-age children who were adopted from Eastern Europe. Method: Children adopted between 1;0 and 4;11 (years;months) of age were assessed with the Clinical Evaluation of Language Fundamentals-Preschool, Second… 3. Patterns of Cognitive Strengths and Weaknesses: Identification Rates, Agreement, and Validity for Learning Disabilities Identification ERIC Educational Resources Information Center Miciak, Jeremy; Fletcher, Jack M.; Stuebing, Karla K.; Vaughn, Sharon; Tolar, Tammy D. 2014-01-01 Few empirical investigations have evaluated learning disabilities (LD) identification methods based on a pattern of cognitive strengths and weaknesses (PSW). This study investigated the reliability and validity of two proposed PSW methods: the concordance/discordance method (C/DM) and cross battery assessment (XBA) method. Cognitive assessment… 4. Memory Profiles in Children with Mild Intellectual Disabilities: Strengths and Weaknesses ERIC Educational Resources Information Center Van der Molen, Mariet J.; Van Luit, Johannes E. H.; Jongmans, Marian J.; Van der Molen, Maurits W. 
2009-01-01 Strengths and weaknesses in short-term memory (STM) and working memory (WM) were identified in children with mild intellectual disabilities (MID) by comparing their performance to typically developing children matched on chronological age (CA children) and to younger typically developing children with similar mental capacities (MA children).… 5. Identifying Profiles of Reading Strengths and Weaknesses at the Secondary Level ERIC Educational Resources Information Center Trentman, Allison M. McCarthy 2012-01-01 The purpose of this study was to evaluate the feasibility and potential utility of reading profiles to identify common patterns of reading strengths and weaknesses among students in high school with deficit reading skills. A total of 55 students from three Midwestern high schools were administered a battery of assessments that targeted specific… 6. The strength of weak connections in the macaque cortico-cortical network. PubMed Goulas, Alexandros; Schaefer, Alexander; Margulies, Daniel S 2015-09-01 Examination of the cortico-cortical network of mammals has unraveled key topological features and their role in the function of the healthy and diseased brain. Recent findings from social and biological networks pinpoint the significant role of weak connections in network coherence and mediation of information from segregated parts of the network. In the current study, inspired by such findings and proposed architectures pertaining to social networks, we examine the structure of weak connections in the macaque cortico-cortical network by employing a tract-tracing dataset. We demonstrate that the cortico-cortical connections as a whole, as well as connections between segregated communities of brain areas, comply with the architecture suggested by the so-called strength-of-weak-ties hypothesis. However, we find that the wiring of these connections is not optimal with respect to the aforementioned architecture. 
This configuration is not attributable to a trade-off with factors known to constrain brain wiring, i.e., wiring cost and efficiency. Lastly, weak connections, but not strong ones, appear important for network cohesion. Our findings relate a topological property to the strength of cortico-cortical connections, highlight the prominent role of weak connections in the cortico-cortical structural network and pinpoint their potential functional significance. These findings suggest that certain neuroimaging studies, despite methodological challenges, should explicitly take them into account and not treat them as negligible. PMID:25035063 7. Extraction of Weak Transition Strengths via the (He3, t) Reaction at 420MeV NASA Astrophysics Data System (ADS) Zegers, R. G. T.; Adachi, T.; Akimune, H.; Austin, Sam M.; van den Berg, A. M.; Brown, B. A.; Fujita, Y.; Fujiwara, M.; Galès, S.; Guess, C. J.; Harakeh, M. N.; Hashimoto, H.; Hatanaka, K.; Hayami, R.; Hitt, G. W.; Howard, M. E.; Itoh, M.; Kawabata, T.; Kawase, K.; Kinoshita, M.; Matsubara, M.; Nakanishi, K.; Nakayama, S.; Okumura, S.; Ohta, T.; Sakemi, Y.; Shimbara, Y.; Shimizu, Y.; Scholl, C.; Simenel, C.; Tameshige, Y.; Tamii, A.; Uchida, M.; Yamagata, T.; Yosoi, M. 2007-11-01 Differential cross sections for transitions of known weak strength were measured with the (He3, t) reaction at 420 MeV on targets of C12, C13, O18, Mg26, Ni58, Ni60, Zr90, Sn118, Sn120, and Pb208. Using these data, it is shown that the proportionalities between strengths and cross sections for this probe follow simple trends as a function of mass number. These trends can be used to confidently determine Gamow-Teller strength distributions in nuclei for which the proportionality cannot be calibrated via β-decay strengths. 
Although theoretical calculations in the distorted-wave Born approximation overestimate the data, they allow one to understand the main experimental features and to predict deviations from the simple trends observed in some of the transitions. 9. Does Weak Turbulence Impact PMSEs' Strengths Closer To The Northern Pole? NASA Astrophysics Data System (ADS) Swarnalingam, N.; Hocking, W. K.; Janches, D.; Nicolls, M. J. 2015-12-01 Existing 51.0 MHz VHF radar at Eureka (80N, 86W) in northern Canada is located closer to both the northern magnetic and geomagnetic poles.
A recent calibrated study of Polar Mesosphere Summer Echoes (PMSE) using this radar supports the previous results by other radars that the absolute signal strength of PMSE in this region is relatively weak compared with the radar observations located at high latitudes. Although very cold temperature and existence of charged ice particles are the most important ingredient required for PMSE to appear, several other factors could potentially influence the absolute signal strengths of these echoes. One of them is neutral air turbulence. Previous studies indicate that upper mesospheric turbulence's strength decreases with latitudes, especially in the very high latitudes [Becker, 2004; Lubken et. al., 2009]. In this study, we investigate long-term mesospheric turbulence strengths at Eureka and study how they could be associated with the weak PMSE signal strengths compared with other high latitude conditions, where PMSE are strong. 10. Role of editors and journals in detecting and preventing scientific misconduct: strengths, weaknesses, opportunities, and threats. PubMed Marusic, Ana; Katavic, Vedran; Marusic, Matko 2007-09-01 Scientific journals have a central place in protecting research integrity because published articles are the most visible documentation of research. We used SWOT analysis to audit (S)trengths and (W)eaknesses as internal and (O)pportunities and (T)hreats as external factors affecting journals' responsibility in addressing research integrity issues. Strengths include editorial independence, authority and expertise, power to formulate editorial policies, and responsibility for the integrity of published records. Weaknesses stem from having no mandate for legal action, reluctance to get involved, and lack of training. Opportunities for editors are new technologies for detecting misconduct, policies by editorial organization or national institutions, and greater transparency of published research. 
Editors face threats from the lack of legal regulation and culture of research integrity in academic communities, lack of support from stakeholders in scientific publishing, and different pressures. Journal editors cannot be the policing force of the scientific community but they should actively ensure the integrity of the scientific record. 11. Strengths and weaknesses of McNamara's evolutionary psychological model of dreaming. PubMed Olliges, Sandra 2010-10-07 This article includes a brief overview of McNamara's (2004) evolutionary model of dreaming. The strengths and weaknesses of this model are then evaluated in terms of its consonance with measurable neurological and biological properties of dreaming, its fit within the tenets of evolutionary theories of dreams, and its alignment with evolutionary concepts of cooperation and spirituality. McNamara's model focuses primarily on dreaming that occurs during rapid eye movement (REM) sleep; therefore this article also focuses on REM dreaming. 12. SWOT analysis: strengths, weaknesses, opportunities and threats of the Israeli Smallpox Revaccination Program. PubMed Huerta, Michael; Balicer, Ran D; Leventhal, Alex 2003-01-01 During September 2002, Israel began its current revaccination program against smallpox, targeting previously vaccinated "first responders" among medical and emergency workers. In order to identify the potential strengths and weaknesses of this program and the conditions under which critical decisions were reached, we conducted a SWOT analysis of the current Israeli revaccination program, designed to identify its intrinsic strengths and weaknesses, as well as opportunities for its success and threats against it. SWOT analysis--a practical tool for the study of public health policy decisions and the social and political contexts in which they are reached--revealed clear and substantial strengths and weaknesses of the current smallpox revaccination program, intrinsic to the vaccine itself. 
A number of threats were identified that may jeopardize the success of the current program, chief among them the appearance of severe complications of vaccination. Our finding of a lack of a generation of knowledge on smallpox vaccination highlights the need for improved physician education and dissipation of misconceptions that are prevalent in the public today. PMID:12592958 14. Structure and function of emergency care research networks: strengths, weaknesses, and challenges.
PubMed Papa, Linda; Kuppermann, Nathan; Lamond, Katherine; Barsan, William G; Camargo, Carlos A; Ornato, Joseph P; Stiell, Ian G; Talan, David A 2009-10-01 The ability of emergency care research (ECR) to produce meaningful improvements in the outcomes of acutely ill or injured patients depends on the optimal configuration, infrastructure, organization, and support of emergency care research networks (ECRNs). Through the experiences of existing ECRNs, we can learn how to best accomplish this. A meeting was organized in Washington, DC, on May 28, 2008, to discuss the present state and future directions of clinical research networks as they relate to emergency care. Prior to the conference, at the time of online registration, participants responded to a series of preconference questions addressing the relevant issues that would form the basis of the breakout session discussions. During the conference, representatives from a number of existing ECRNs participated in discussions with the attendees and provided a description of their respective networks, infrastructure, and challenges. Breakout sessions provided the opportunity to further discuss the strengths and weaknesses of these networks and patterns of success with respect to their formation, management, funding, best practices, and pitfalls. Discussions centered on identifying characteristics that promote or inhibit successful networks and their interactivity, productivity, and expansion. Here the authors describe the current state of ECRNs and identify the strengths, weaknesses, and potential pitfalls of research networks. The most commonly cited strengths of population- or disease-based research networks identified in the preconference survey were access to larger numbers of patients; involvement of physician experts in the field, contributing to high-level study content; and the collaboration among investigators. 
The most commonly cited weaknesses were studies with too narrow a focus and restrictive inclusion criteria, a vast organizational structure with a risk of either too much or too little central organization or control, and heterogeneity of institutional 15. Strength of iron at core pressures and evidence for a weak Earth’s inner core SciTech Connect Gleason, A. E.; Mao, W. L. 2013-05-12 The strength of iron at extreme conditions is crucial information for interpreting geophysical observations of the Earth’s core and understanding how the solid inner core deforms. However, the strength of iron, on which deformation depends, is challenging to measure and accurately predict at high pressure. Here we present shear strength measurements of iron up to pressures experienced in the Earth’s core. Hydrostatic X-ray spectroscopy and non-hydrostatic radial X-ray diffraction measurements of the deviatoric strain in hexagonally close-packed iron uniquely determine its shear strength to pressures above 200 GPa at room temperature. Applying numerical modelling of the rheologic behaviour of iron under pressure, we extrapolate our experimental results to inner-core pressures and temperatures, and find that the bulk shear strength of hexagonally close-packed iron is only ~ 1 GPa at the conditions of the Earth’s centre, 364 GPa and 5,500 K. This suggests that the inner core is rheologically weak, which supports dislocation creep as the dominant creep mechanism influencing deformation. 16. A Historical Perspective on the Development of the Allan Variances and Their Strengths and Weaknesses. PubMed Allan, David W; Levine, Judah 2016-04-01 Over the past 50 years, variances have been developed for characterizing the instabilities of precision clocks and oscillators. These instabilities are often modeled as nonstationary processes, and the variances have been shown to be well-behaved and to be unbiased, efficient descriptors of these types of processes. 
This paper presents a historical overview of the development of these variances. The time-domain and frequency-domain formulations are presented and their development is described. The strengths and weaknesses of these characterization metrics are discussed. These variances are also shown to be useful in other applications, such as in telecommunication. 17. Biotic indices for assessing the status of coastal waters: a review of strengths and weaknesses. PubMed Martínez-Crego, Begoña; Alcoverro, Teresa; Romero, Javier 2010-05-01 Biotic indices have become key assessment tools in most recent national and trans-national policies aimed at improving the quality of coastal waters and the integrity of their associated ecosystems. In this study we analyzed 90 published biotic indices, classified them into four types, and analyzed the strengths and weaknesses of each type in relation to the requirements of these policies. We identified three main type-specific weaknesses. First, the problems of applicability, due to practical and conceptual difficulties, which affect most indices related to ecosystem function. Second, the failure of many indices based on structural attributes of the community (e.g. taxonomic composition) to link deterioration with causative stressors, or to provide an early-detection capacity. Third, the poor relevance to the ecological integrity of indices based on attributes at the sub-individual level (e.g. multi-biomarkers). Additionally, most indices still fail on two further aspects: the broad-scale applicability and the definition of reference conditions. Nowadays, the most promising approach seems to be the aggregation of indices with complementary strengths, and obtained from different biological communities. PMID:20383392 18. Supplementary WMS-III tables for determining primary subtest strengths and weaknesses. 
PubMed Ryan, J J; Arb, J D; Ament, P A 2000-06-01 It is common practice to evaluate the age-adjusted subtest scores from the Wechsler intelligence scales to determine strengths and weaknesses within a profile. The Wechsler Memory Scale-III (WMS-III; D. Wechsler, 1997a) represents a significant improvement over its predecessors and, for the first time, provides age-adjusted subtest scores for interpretation, just as the Wechsler intelligence scales have done for 60 years. It is reasonable to assume that examiners will evaluate the WMS-III subtest profiles for strengths and weaknesses. However, the WMS-III Administration and Scoring Manual and the WAIS-III-WMS-III Technical Manual (The Psychological Corporation, 1997) provide no assistance for accomplishing this goal. Data from the WMS-III standardization sample, as described in the WAIS-III-WMS-III Technical Manual, were used to develop tables for determining both confidence levels and infrequency of differences between individual subtest scores and the means of 5 subtest combinations that may be clinically relevant for individual cases. 19. Communicable Diseases Surveillance System in East Azerbaijan Earthquake: Strengths and Weaknesses PubMed Central Babaie, Javad; Fatemi, Farin; Ardalan, Ali; Mohammadi, Hamed; Soroush, Mahmood 2014-01-01 Background: A Surveillance System was established for 19 diseases/syndromes in order to prevent and control communicable diseases after 2012 East Azerbaijan earthquakes. This study was conducted to investigate the strengths and weaknesses of the established SS. Methods: This study was carried out on an interview-based qualitative study using content analysis in 2012. Data was collected by semi-structured deep interviews and surveillance data. Fifteen interviews were conducted with experts and health system managers who were engaged in implementing the communicable disease surveillance system in the affected areas. The selection of participants was purposeful. 
Data saturation supported the sample size. The collected data were analyzed using the principles suggested by Strauss and Corbin. Results: Establishment of the disease surveillance system was rapid and inexpensive. It collected the required data quickly. It also increased confidence in health authorities that the diseases would be under control in earthquake-stricken regions. The lack of an estimated denominator for calculating rates (incidence and prevalence), non-participation of the private sector and hospitals, rapid turnover of health staff, and unfamiliarity with case definitions were the weak points of the established surveillance system. Conclusion: While the surveillance system was active, no significant outbreak of communicable diseases was reported. However, the surveillance system had some weaknesses. Thus, considering Iran’s susceptibility to various natural hazards, repeated exercises should be conducted in the preparedness phase to decrease the weaknesses. In addition, other types of surveillance system, such as web-based or mobile-based systems, should be piloted in disaster situations for future use. PMID:25685619 20. The Janus-faced nature of comparative psychology--strength or weakness? PubMed Burghardt, Gordon M 2013-07-18 What is the nature of comparative psychology and how does or should it relate to evolutionary psychology? This is a time of reassessment of both fields, and this article reviews the history of comparative psychology and its relationships with evolutionary psychology, ethology, and other approaches to behavior from the perspective of a former editor of the Journal of Comparative Psychology who has spent many decades engaged in research in animal behavior. Special attention is given to a reassessment of comparative psychology that was carried out in 1987.
The various tensions and orientations that seem endemic to comparative psychology may, in fact, be both a strength and a weakness as comparative psychology and evolutionary approaches to human psychology return to issues prominent in the late 19th Century, when both fields were just becoming established. 1. Strengths and weaknesses of Problem Based Learning from the professional perspective of registered nurses PubMed Central Cónsul-Giribet, María; Medina-Moya, José Luis 2014-01-01 OBJECTIVE: to identify competency strengths and weaknesses as perceived by nursing professionals who graduated from an integrated, competency-based curriculum taught through Problem Based Learning in small groups. METHOD: an intrinsic case study method was used, which analyzes this innovation through former students (from the first class) with three years of professional experience. The data were collected through a questionnaire and discussion groups. RESULTS: the results show that their competency level is rated as very satisfactory. This level paradoxically contrasts with the lack of theoretical knowledge they perceived at the end of their education, when they started working in clinical practice. CONCLUSIONS: the teaching strategy was key to motivate in-depth study and arouse the desire to know. In addition, Problem Based Learning favors and reinforces the decision to learn, which is necessary throughout professional life. PMID:25493666 2. The voluntary community health movement in India: a strengths, weaknesses, opportunities, and threats (SWOT) analysis. PubMed Sharma, M; Bhatia, G 1996-12-01 There has been a prolific growth of voluntary organizations in India since independence in 1947. One of the major areas of this growth has been in the field of community health. The purpose of this article is to historically trace the voluntary movement in community health in India, analyze the current status, and predict future trends of voluntary efforts.
A review of the literature in the form of a Strengths, Weaknesses, Opportunities, and Threats (SWOT) analysis was the method of this study. Some of the key trends which emerged as the priority areas for progress and for strengthening voluntary organizations in the future were enhancing linkages between health and development; building upon collective force; greater utilization of participatory training; establishing egalitarian and effectual linkages for decision making at the international level; developing self-reliant community-based models; and the need for attaining holistic empowerment at individual, organizational, and community levels through "duty consciousness" as opposed to merely asking for rights. 3. Traits-based approaches in bioassessment and ecological risk assessment: strengths, weaknesses, opportunities and threats. PubMed Van den Brink, Paul J; Alexander, Alexa C; Desrosiers, Mélanie; Goedkoop, Willem; Goethals, Peter L M; Liess, Matthias; Dyer, Scott D 2011-04-01 We discuss the application of traits-based bioassessment approaches in retrospective bioassessment as well as in prospective ecological risk assessments in regulatory frameworks. Both approaches address the interaction between species and stressors and their consequences at different levels of biological organization, but the fact that a specific species may be less abundant in a potentially impacted site compared with a reference site is, regrettably, insufficient to provide diagnostic information. Species traits may, however, overcome the problems associated with taxonomy-based bioassessment. Trait-based approaches could provide signals regarding what environmental factors may be responsible for the impairment and, thereby, provide causal insight into the interaction between species and stressors. For the development of traits-based approaches (TBA), traits should correspond to specific types of stressors or suites of stressors.
In this paper, a strengths, weaknesses, opportunities, and threats (SWOT) analysis of TBA in both applications was used to identify challenges and potentials. This paper is part of a series describing the output of the TERA (Traits-based ecological risk assessment: Realising the potential of ecoinformatics approaches in ecotoxicology) Workshop held between 7 and 11 September, 2009, in Burlington, Ontario, Canada. The recognized strengths were that traits are transferrable across geographies, add mechanistic and diagnostic knowledge, require no new sampling methodology, have an old tradition, and can supplement taxonomic analysis. Weaknesses include autocorrelation, redundancy, and inability to protect biodiversity directly. Automated image analysis, combined with genetic and biotechnology tools and improved data analysis to solve autocorrelation problems were identified as opportunities, whereas low availability of trait data, their transferability, their quantitative interpretation, the risk of developing nonrelevant traits, low quality of historic 4. The availability of attentional resources modulates the inhibitory strength related to weakly activated priming. PubMed Wang, Yongchun; Wang, Yonghui; Liu, Peng; Dai, Dongyang; Di, Meilin; Chen, Qiang 2016-08-01 The current study investigated the role of attention in inhibitory processes (the inhibitory processes described in the current study refer only to those associated with masked or flanked priming) using a mixed paradigm involving the negative compatibility effect (NCE) and object-based attention. Accumulating evidence suggests that attention can be spread more easily within the same object, which increases the availability of attentional resources, than across different objects. 
Accordingly, we manipulated distractor location (with primes presented in the same object versus presented in different objects) together with prime/target compatibility (compatible versus incompatible) and prime-distractor stimulus onset asynchrony (SOA, 23 ms vs 70 ms). The aim was to investigate whether inhibitory processes related to weakly activated priming, which have been previously assumed to be automatic, depend on the availability of attentional resources. The results of Experiment 1 showed a significant NCE for the 70-ms SOA when the prime and distractor were presented in the same object (greater attentional resource availability); however, reversed NCEs were obtained for all other conditions. Experiment 2 was designed to disentangle whether the results of Experiment 1 were affected by the prime position, and the results indicated that the prime position did not modulate the NCE in Experiment 1. Together, these results are consistent with the claim that the availability of attentional resources modulates the inhibitory strength related to weakly activated priming. Specifically, if attentional resources are assigned to the distractor when it is presented in the same object as the prime, the strength of the inhibition elicited by the distractor may increase and reverse the activation elicited by the prime, which could lead to a significant NCE. PMID:27198916 5. Partitioning in aqueous two-phase systems: Analysis of strengths, weaknesses, opportunities and threats. PubMed Soares, Ruben R G; Azevedo, Ana M; Van Alstine, James M; Aires-Barros, M Raquel 2015-08-01 For half a century aqueous two-phase systems (ATPSs) have been applied for the extraction and purification of biomolecules. In spite of their simplicity, selectivity, and relatively low cost they have not been significantly employed for industrial scale bioprocessing. 
Recently their ability to be readily scaled and interface easily in single-use, flexible biomanufacturing has led to industrial re-evaluation of ATPSs. The purpose of this review is to perform a SWOT analysis that includes a discussion of: (i) strengths of ATPS partitioning as an effective and simple platform for biomolecule purification; (ii) weaknesses of ATPS partitioning in regard to intrinsic problems and possible solutions; (iii) opportunities related to biotechnological challenges that ATPS partitioning may solve; and (iv) threats related to alternative techniques that may compete with ATPS in performance, economic benefits, scale up and reliability. This approach provides insight into the current status of ATPS as a bioprocessing technique and it can be concluded that most of the perceived weaknesses towards industrial implementation have now been largely overcome, thus paving the way for opportunities in fermentation feed clarification, integration in multi-stage operations and in single-step purification processes. PMID:26213222 7. Strengths and weaknesses of in-tube solid-phase microextraction: A scoping review. PubMed Fernández-Amado, M; Prieto-Blanco, M C; López-Mahía, P; Muniategui-Lorenzo, S; Prada-Rodríguez, D 2016-02-01 In-tube solid-phase microextraction (in-tube SPME or IT-SPME) is a sample preparation technique which has demonstrated over time its ability to couple with liquid chromatography (LC), as well as its advantages as a miniaturized technique. However, the in-tube SPME perspectives in the forthcoming years depend on solutions that can be brought to the environmental, industrial, food and biomedical analysis. The purpose of this scoping review is to examine the strengths and weaknesses of this technique during the period 2009 to 2015 in order to identify research gaps that should be addressed in the future, as well as the tendencies that are meant to strengthen the technique.
In terms of methodological aspects, this scoping review shows the in-tube SPME strengths in the coupling with LC (LC-mass spectrometry, capillary LC, ultra-high-pressure LC), in the new performances (magnetic IT-SPME and electrochemically controlled in-tube SPME) and in the wide range of development of coatings and capillaries. Concerning the applicability, most in-tube SPME studies (around 80%) carry out environmental and biomedical analyses, a lower number food analyses and few industrial analyses. Some promising studies in proteomics have been performed. The review makes a critical description of parameters used in the optimization of in-tube SPME methods, highlighting the importance of some of them (i.e. type of capillary coatings). Commercial capillaries in environmental analysis and laboratory-prepared capillaries in biomedical analysis have been employed with good results. The most consolidated configuration is in-valve mode, however the cycle mode configuration is frequently chosen for biomedical analysis. This scoping review revealed that some aspects such as the combination of in-tube SPME with other sample treatment techniques for the analysis of solid samples should be developed in depth in the near future. 8. The global health concept of the German government: strengths, weaknesses, and opportunities. PubMed Bozorgmehr, Kayvan; Bruchhausen, Walter; Hein, Wolfgang; Knipper, Michael; Korte, Rolf; Razum, Oliver; Tinnemann, Peter 2014-01-01 Recognising global health as a rapidly emerging policy field, the German federal government recently released a national concept note for global health politics (July 10, 2013). As the German government could have a significant impact on health globally by making a coherent, evidence-informed, and long-term commitment in this field, we offer an initial appraisal of the strengths, weaknesses, and opportunities for development recognised in this document. 
We conclude that the national concept is an important first step towards the implementation of a coherent global health policy. However, important gaps were identified in the areas of intellectual property rights and access to medicines. In addition, global health determinants such as trade, economic crises, and liberalisation as well as European Union issues such as the health of migrants, refugees, and asylum seekers are not adequately addressed. Furthermore, little information is provided about the establishment of instruments to ensure an effective inter-ministerial cooperation. Finally, because implementation aspects for the national concept are critical for the success of this initiative, we call upon the newly elected 2013 German government to formulate a global health strategy, which includes a concrete plan of action, a time scale, and measurable goals. 10. Analysis of the strengths and weaknesses of acid rain electronic data reports SciTech Connect Schott, J. 1997-12-31 Entergy Corporation is a Phase II utility with a fossil generation base composed primarily of natural gas and low sulfur coal. This paper presents an analysis of a large Phase II utility's continuous emissions monitoring data reported to EPA under Title IV Acid Rain. Electric utilities currently report hourly emissions of NOx, SO2, CO2, fuel use, and generation through electronic data reports to EPA. This paper describes strengths and weaknesses of the data reported to EPA as determined through an analysis of 1995 data. Emissions reported by this company under acid rain for SO2 and NOx are very different from emissions reported to state agencies for annual emission inventory purposes in past years and will represent a significant break with historic trends. A comparison has been made between 1995 emissions reported under Electronic Data Reports and the emissions that would have been reported using emission factors and fuel data in past years. In addition, the paper examines the impacts of 40 CFR Part 75 Acid Rain requirements such as missing data substitution and monitor bias adjustments.
Measurement system errors including stack flow measurement and false NOx lb/MMBtu readings at very low loads are discussed. This paper describes the implications for public policy, compliance, emissions inventories, and business decisions of Part 75 acid rain monitoring and reporting requirements. 11. Patterns of Cognitive Strengths and Weaknesses: Identification Rates, Agreement, and Validity for Learning Disabilities Identification PubMed Central Miciak, Jeremy; Fletcher, Jack M.; Stuebing, Karla; Vaughn, Sharon; Tolar, Tammy D. 2014-01-01 Purpose Few empirical investigations have evaluated LD identification methods based on a pattern of cognitive strengths and weaknesses (PSW). This study investigated the reliability and validity of two proposed PSW methods: the concordance/discordance method (C/DM) and cross battery assessment (XBA) method. Methods Cognitive assessment data for 139 adolescents demonstrating inadequate response to intervention were utilized to empirically classify participants as meeting or not meeting PSW LD identification criteria using the two approaches, permitting an analysis of: (1) LD identification rates; (2) agreement between methods; and (3) external validity. Results LD identification rates varied between the two methods depending upon the cut point for low achievement, with low agreement for LD identification decisions. Comparisons of groups that met and did not meet LD identification criteria on external academic variables were largely null, raising questions of external validity. Conclusions This study found low agreement and little evidence of validity for LD identification decisions based on PSW methods. An alternative may be to use multiple measures of academic achievement to guide intervention. PMID:24274155 14. The Prevalence, Genotype and Antimicrobial Susceptibility of High- and Low-Level Mupirocin Resistant Methicillin-Resistant Staphylococcus aureus PubMed Central Park, Se Young; Kim, Shin Moo 2012-01-01 Background Mupirocin has been used for the treatment of skin infections and eradication of nasal carriage of methicillin-resistant Staphylococcus aureus (MRSA). The increased use of this antibiotic has been accompanied by outbreaks of MRSA that are resistant to mupirocin. Objective This study aims to determine the prevalence, genotype and antimicrobial susceptibility of mupirocin-resistant MRSA from 4 Korean hospitals. Methods A total of 193 MRSA clinical isolates were collected from four university hospitals. Antimicrobial susceptibility tests, including mupirocin, and pulsed-field gel electrophoresis (PFGE) pattern analysis were performed. Results Overall, 27 of the 193 (14.1%) MRSA isolates were resistant to mupirocin. All of the isolates from hospital (A) showed high-level (HL) mupirocin resistance and the low-level (LL) mupirocin resistant strains were from three other hospitals.
The PFGE patterns of 16 mupirocin-resistant isolates were divided into 5 clusters (1-5), and the nine HL mupirocin-resistant isolates belonged to cluster 1. Both the HL and LL mupirocin-resistant MRSA isolates were susceptible to vancomycin and rifampin, but they were resistant to ciprofloxacin, clindamycin and tetracycline. The erythromycin and fusidic acid resistance rates were different between the HL and LL resistant isolates. Conclusion HL mupirocin-resistant isolates that could transfer this resistance to other bacteria were detected and these isolates were clonally related. The emergence of mupirocin resistant isolates emphasizes the importance of using antibiotics judiciously and carefully monitoring the prevalence of mupirocin resistance. PMID:22363153 15. Neomycin Sulfate Improves the Antimicrobial Activity of Mupirocin-Based Antibacterial Ointments PubMed Central Blanchard, Catlyn; Brooks, Lauren; Beckley, Andrew; Colquhoun, Jennifer; Dewhurst, Stephen 2015-01-01 In the midst of the current antimicrobial pipeline void, alternative approaches are needed to reduce the incidence of infection and decrease reliance on last-resort antibiotics for the therapeutic intervention of bacterial pathogens. In that regard, mupirocin ointment-based decolonization and wound maintenance practices have proven effective in reducing Staphylococcus aureus transmission and mitigating invasive disease. However, the emergence of mupirocin-resistant strains has compromised the agent's efficacy, necessitating new strategies for the prevention of staphylococcal infections. Herein, we set out to improve the performance of mupirocin-based ointments. A screen of a Food and Drug Administration (FDA)-approved drug library revealed that the antibiotic neomycin sulfate potentiates the antimicrobial activity of mupirocin, whereas other library antibiotics did not. 
Preliminary mechanism of action studies indicate that neomycin's potentiating activity may be mediated by inhibition of the organism's RNase P, an enzyme that is believed to participate in the tRNA processing pathway immediately upstream of the primary target of mupirocin. The improved antimicrobial activity of neomycin and mupirocin was maintained in ointment formulations and reduced S. aureus bacterial burden in murine models of nasal colonization and wound site infections. Combination therapy improved upon the effects of either agent alone and was effective in the treatment of contemporary methicillin-susceptible, methicillin-resistant, and high-level mupirocin-resistant S. aureus strains. From these perspectives, combination mupirocin-and-neomycin ointments appear to be superior to mupirocin alone and warrant further development. PMID:26596945 16. Mupirocin resistance: clinical implications and potential alternatives for the eradication of MRSA. PubMed Poovelikunnel, T; Gethin, G; Humphreys, H 2015-10-01 Mupirocin 2% ointment is used either alone or with skin antiseptics as part of a comprehensive MRSA decolonization strategy. Increased mupirocin use predisposes to mupirocin resistance, which is significantly associated with persistent MRSA carriage. Mupirocin resistance as high as 81% has been reported. There is a strong association between previous mupirocin exposure and both low-level and high-level mupirocin resistance. High-level mupirocin resistance (mupA carriage) is also linked to MDR. Among MRSA isolates, the presence of the qacA and/or qacB gene, encoding resistance to chlorhexidine, ranges from 65% to 91%, which, along with mupirocin resistance, is associated with failed decolonization. This is of significant concern for patient care and infection prevention and control strategies as both these agents are used concurrently for decolonization. 
Increasing bacterial resistance necessitates the discovery or development of new antimicrobial therapies. These include, for example, polyhexanide, lysostaphin, ethanol, omiganan pentahydrochloride, tea tree oil, probiotics, bacteriophages and honey. However, few of these have been evaluated fully or extensively tested in clinical trials and this is required to in part address the implications of mupirocin resistance. PMID:26142407 17. Validation of the Chinese Strengths and Weaknesses of ADHD-Symptoms and Normal-Behaviors Questionnaire in Hong Kong ERIC Educational Resources Information Center Lai, Kelly Y. C.; Leung, Patrick W. L.; Luk, Ernest S. L.; Wong, Ann S. Y.; Law, Lawrence S. C.; Ho, Karen K. Y. 2013-01-01 Objective: Unlike rating scales that focus on the severity of ADHD symptoms, the Strengths and Weaknesses of ADHD-Symptoms and Normal-Behaviors (SWAN) rating scale is phrased in neutral or positive terms for carers to compare the index child's behaviors with that of their peers. This study explores its psychometric properties when applied to… 18. Global Analysis of the Staphylococcus aureus Response to Mupirocin PubMed Central Reiß, Swantje; Pané-Farré, Jan; Fuchs, Stephan; François, Patrice; Liebeke, Manuel; Schrenzel, Jacques; Lindequist, Ulrike; Lalk, Michael; Wolz, Christiane; Hecker, Michael 2012-01-01 In the present study, we analyzed the response of S. aureus to mupirocin, the drug of choice for nasal decolonization. Mupirocin selectively inhibits the bacterial isoleucyl-tRNA synthetase (IleRS), leading to the accumulation of uncharged isoleucyl-tRNA and eventually the synthesis of (p)ppGpp. The alarmone (p)ppGpp induces the stringent response, an important global transcriptional and translational control mechanism that allows bacteria to adapt to nutritional deprivation. 
To identify proteins with an altered synthesis pattern in response to mupirocin treatment, we used the highly sensitive 2-dimensional gel electrophoresis technique in combination with mass spectrometry. The results were complemented by DNA microarray, Northern blot, and metabolome analyses. Whereas expression of genes involved in nucleotide biosynthesis, DNA metabolism, energy metabolism, and translation was significantly downregulated, expression of isoleucyl-tRNA synthetase, the branched-chain amino acid pathway, and genes with functions in oxidative-stress resistance (ahpC and katA) and putative roles in stress protection (the yvyD homologue SACOL0815 and SACOL1759 and SACOL2131) and transport processes was increased. A comparison of the regulated genes to known regulons suggests the involvement of the global regulators CodY and SigB in shaping the response of S. aureus to mupirocin. Of particular interest was the induced transcription of genes encoding virulence-associated regulators (i.e., arlRS, saeRS, sarA, sarR, sarS, and sigB), as well as genes directly involved in the virulence of S. aureus (i.e., fnbA, epiE, epiG, and seb). PMID:22106209 19. Evaluation of the strengths and weaknesses of community-based education from the viewpoint of students PubMed Central MOKHTARPOUR, SEDIGHEH; AMINI, MITRA; MOUSAVINEZHAD, HOURI; CHOOBINEH, ALIREZA; NABEIEI, PARISA 2016-01-01 Introduction: Responsive medicine is an appropriate training method which trains the graduates who can act effectively in initial and secondary aspects of health issues in the society. Methods: This was a cross-sectional descriptive-analytic study which was done using quantitative method. The target population of this study was all the students of the Nutrition and Health School of Shiraz University of Medical Sciences. 
The sample was randomly selected: 75 students were chosen based on the methodologist’s comments, similar studies, and a random-number table, from a list obtained from the school’s department of education. The questionnaire was a researcher-made one consisting of 23 questions in 2 sections, with 21 closed-ended questions and 2 open-ended questions; 70 questionnaires were completed correctly. The closed-ended questions had 4 aspects (completely agree to completely disagree) answered on a 5-point Likert scale. Its face validity was confirmed by 4 faculty members. The construct validity of the questionnaire was analyzed by factor analysis test and its reliability was assessed by a pilot on 20 students with a Cronbach’s alpha of 0.85. The data were analyzed using descriptive statistical tests (mean, standard deviation, …) and the Pearson coefficient (p<0.001). Results: The results of this study showed that the maximum mean score was 3.58±0.65, which was related to the context of these courses, and the minimum mean was 2.66±1.14, which was related to the logbook implementation. The 2 open-ended questions indicated that the most important strengths were the use of logbooks as a guide and determining the minimum training; the most important weakness was the mismatch between the theoretical education and the practical activities. Also, developing the minimum training that an expert should know and using the common topics related to theoretical education were the most important points mentioned by the respondents. Conclusions: The 20. The Strength of Weak Identities: Social Structural Sources of Self, Situation and Emotional Experience ERIC Educational Resources Information Center Smith-Lovin, Lynn 2007-01-01 Modern societies are highly differentiated, with relatively uncorrelated socially salient dimensions and a preponderance of weak, unidimensional (as opposed to strong, multiplex) ties. 
What are the implications of a society with fewer strong ties and more weak ties for the self? What do these changes mean for our emotional experience in everyday… 1. Analysis of Strengths, Weaknesses, Opportunities, and Threats as a Tool for Translating Evidence into Individualized Medical Strategies (I-SWOT) PubMed Central von Kodolitsch, Yskert; Bernhardt, Alexander M.; Robinson, Peter N.; Kölbel, Tilo; Reichenspurner, Hermann; Debus, Sebastian; Detter, Christian 2015-01-01 Background It is the physicians’ task to translate evidence and guidelines into medical strategies for individual patients. Until today, however, there is no formal tool that is instrumental to perform this translation. Methods We introduce the analysis of strengths (S) and weaknesses (W) related to therapy with opportunities (O) and threats (T) related to individual patients as a tool to establish an individualized (I) medical strategy (I-SWOT). The I-SWOT matrix identifies four fundamental types of strategy. These comprise “SO” maximizing strengths and opportunities, “WT” minimizing weaknesses and threats, “WO” minimizing weaknesses and maximizing opportunities, and “ST” maximizing strengths and minimizing threats. Each distinct type of strategy may be considered for individualized medical strategies. Results We describe four steps of I-SWOT to establish an individualized medical strategy to treat aortic disease. In the first step, we define the goal of therapy and identify all evidence-based therapeutic options. In a second step, we assess strengths and weaknesses of each therapeutic option in a SW matrix form. In a third step, we assess opportunities and threats related to the individual patient, and in a final step, we use the I-SWOT matrix to establish an individualized medical strategy through matching “SW” with “OT”. As an example we present two 30-year-old patients with Marfan syndrome with identical medical history and aortic pathology. 
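The four fundamental strategy types of the I-SWOT matrix described in the abstract above (SO, WT, WO, ST) amount to a two-by-two lookup over therapy-side and patient-side factors. A minimal sketch, assuming an illustrative encoding of the matrix (the dictionary keys and function name are not the authors' notation):

```python
# I-SWOT pairs a therapy-side factor (Strength/Weakness) with a
# patient-side factor (Opportunity/Threat) to name a strategy type.
I_SWOT = {
    ("S", "O"): "SO: maximize strengths and opportunities",
    ("W", "T"): "WT: minimize weaknesses and threats",
    ("W", "O"): "WO: minimize weaknesses, maximize opportunities",
    ("S", "T"): "ST: maximize strengths, minimize threats",
}

def strategy(therapy_factor, patient_factor):
    """Look up the strategy type for one (SW, OT) pairing."""
    return I_SWOT[(therapy_factor, patient_factor)]

print(strategy("S", "O"))  # SO: maximize strengths and opportunities
```

The point of the lookup is that matching "SW" with "OT" always lands in exactly one of the four quadrants, which is what makes the matrix usable as a formal decision aid.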
As a result of I-SWOT analysis of their individual opportunities and threats, we identified two distinct medical strategies in these patients. Conclusion I-SWOT is a formal but easy to use tool to translate medical evidence into individualized medical strategies. PMID:27069939 2. While Heisenberg Is Not Looking: The Strength of "Weak Measurements" in Educational Research ERIC Educational Resources Information Center Geelan, David R. 2015-01-01 The concept of "weak measurements" in quantum physics is a way of "cheating" the Uncertainty Principle. Heisenberg stated (and 85 years of experiments have demonstrated) that it is impossible to know both the position and momentum of a particle with arbitrary precision. More precise measurements of one decrease the precision… 3. Scoring the Strengths and Weaknesses of Underage Drinking Laws in the United States PubMed Central Fell, James C.; Thomas, Sue; Scherer, Michael; Fisher, Deborah A.; Romano, Eduardo 2015-01-01 Several studies have examined the impact of a number of minimum legal drinking age 21 (MLDA-21) laws on underage alcohol consumption and alcohol-related crashes in the United States. These studies have contributed to our understanding of how alcohol control laws affect drinking and driving among those who are under age 21. However, much of the extant literature examining underage drinking laws use a “Law/No law” coding which may obscure the variability inherent in each law. Previous literature has demonstrated that inclusion of law strengths may affect outcomes and overall data fit when compared to “Law/No law” coding. In an effort to assess the relative strength of states’ underage drinking legislation, a coding system was developed in 2006 and applied to 16 MLDA-21 laws. The current article updates the previous endeavor and outlines a detailed strength coding mechanism for the current 20 MLDA-21 laws. PMID:26097775 4. 
Registration of weak ULF/ELF oscillations of the surface electric field strength NASA Astrophysics Data System (ADS) Boldyrev, A. I.; Vyazilov, A. E.; Ivanov, V. N.; Kemaev, R. V.; Korovin, V. Ya.; Melyashinskii, A. V.; Pamukhin, K. V.; Panov, V. N.; Shvyrev, Yu. N. 2016-07-01 Measurements of the atmospheric electric field strength made by an electrostatic fluxmeter with a unique threshold sensitivity for such devices (6 × 10^-2 to 10^-3 V m^-1 Hz^-1/2 in the 10^-3 to 25 Hz frequency range) and wide dynamic (120 dB) and spectral (0-25 Hz) ranges are presented. The device parameters make it possible to observe the electric component of global electromagnetic Schumann resonances and long-period fluctuations in the atmospheric electric field strength. 5. Effect of solubilizing agents on mupirocin loading into and release from PEGylated nanoliposomes. PubMed Cern, Ahuva; Nativ-Roth, Einat; Goldblum, Amiram; Barenholz, Yechezkel 2014-07-01 Mupirocin was identified by quantitative structure property relationship models as a good candidate for remote liposomal loading. Mupirocin is an antibiotic that is currently restricted to topical administration because of rapid hydrolysis in vivo to its inactive metabolite. Formulating mupirocin in PEGylated nanoliposomes may potentially expand its use to parenteral administration by protecting it from degradation in the circulation and targeting it (by the enhanced permeability effect) to the infected tissue. Mupirocin is slightly soluble in aqueous medium and its solubility can be increased using solubilizing agents. The effect of the solubilizing agents on mupirocin remote loading was studied when the solubilizing agents were added to the drug loading solution. Propylene glycol was found to increase mupirocin loading, whereas polyethylene glycol 400 showed no effect. 
Hydroxypropyl-β-cyclodextrin (HPCD) showed a concentration-dependent effect on mupirocin loading; using the optimal HPCD concentration increased loading, but higher concentrations inhibited it. The inclusion of HPCD in the liposome aqueous phase while forming the liposomes resulted in increased drug loading and substantially inhibited drug release in serum. 6. The expression and interpretation of uncertain forensic science evidence: verbal equivalence, evidence strength, and the weak evidence effect. PubMed Martire, Kristy A; Kemp, Richard I; Watkins, Ian; Sayle, Malindi A; Newell, Ben R 2013-06-01 Standards published by the Association of Forensic Science Providers (2009, Standards for the formulation of evaluative forensic science expert opinion, Science & Justice, Vol. 49, pp. 161-164) encourage forensic scientists to express their conclusions in the form of a likelihood ratio (LR), in which the value of the evidence is conveyed verbally or numerically. In this article, we report two experiments (using undergraduates and Mechanical Turk recruits) designed to investigate how much decision makers change their beliefs when presented with evidence in the form of verbal or numeric LRs. In Experiment 1 (N = 494), participants read a summary of a larceny trial containing inculpatory expert testimony in which evidence strength (low, moderate, high) and presentation method (verbal, numerical) varied. In Experiment 2 (N = 411), participants read the same larceny trial, this time including either exculpatory or inculpatory expert evidence that varied in strength (low, high) and presentation method (verbal, numerical). Both studies found a reasonable degree of correspondence in observed belief change resulting from verbal and numeric formats. However, belief change was considerably smaller than Bayesian calculations would predict. 
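The Bayesian benchmark against which the observed belief change was compared can be sketched with the odds form of Bayes' theorem; the prior and LR values below are hypothetical illustrations, not figures from the experiments:

```python
def update_belief(prior_prob, likelihood_ratio):
    """Update a probability of guilt given a likelihood ratio (LR),
    using the odds form of Bayes' theorem:
    posterior odds = prior odds * LR."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Hypothetical juror with prior P(guilt) = 0.5 hearing inculpatory
# evidence of "moderate" strength, say LR = 100.
print(update_belief(0.5, 100))  # ~0.990
# Weak inculpatory evidence (LR slightly above 1) should still *raise*
# belief in guilt, never lower it; the "weak evidence effect" reported
# below is a violation of exactly this prediction.
print(update_belief(0.5, 2))    # ~0.667
```

Comparing participants' reported belief change against this normative posterior is how the shortfall relative to "Bayesian calculations" is quantified.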
In addition, participants presented with evidence weakly supporting guilt tended to "invert" the evidence, thereby counterintuitively reducing their belief in the guilt of the accused. This "weak evidence effect" was most apparent in the verbal presentation conditions of both experiments, but only when the evidence was inculpatory. These findings raise questions about the interpretability of LRs by jurors and appear to support an expectancy-based account of the weak evidence effect. 7. A Review of Meta-Analyses in Education: Methodological Strengths and Weaknesses ERIC Educational Resources Information Center Ahn, Soyeon; Ames, Allison J.; Myers, Nicholas D. 2012-01-01 The current review addresses the validity of published meta-analyses in education that determines the credibility and generalizability of study findings using a total of 56 meta-analyses published in education in the 2000s. Our objectives were to evaluate the current meta-analytic practices in education, identify methodological strengths and… 8. Gifted Students with Spatial Strengths and Sequential Weaknesses: An Overlooked and Underidentified Population ERIC Educational Resources Information Center Mann, Rebecca L. 2005-01-01 Gifted students with spatial strengths are often overlooked and underserved in American schools. These students have remarkable areas of talent but often have verbal learning difficulties that prevent them from being identified for gifted services. This article focuses on definitions of spatial ability, characteristics of these learners, possible… 9. Identification of chromosomal location of mupA gene, encoding low-level mupirocin resistance in staphylococcal isolates. PubMed Central Ramsey, M A; Bradley, S F; Kauffman, C A; Morton, T M 1996-01-01 Low- and high-level mupirocin resistance have been reported in Staphylococcus aureus. The expression of plasmid-encoded mupA is responsible for high-level mupirocin resistance. 
Low-level mupirocin-resistant strains do not contain plasmid-encoded mupA, and a chromosomal location for this gene has not previously been reported. We examined high- and low-level mupirocin-resistant S. aureus strains to determine if mupA was present on the chromosome of low-level-resistant isolates. Southern blot analysis of DNA from four mupirocin-resistant strains identified mupA in both high- and low-level mupirocin-resistant strains. Low-level mupirocin-resistant strains contained a copy of mupA on the chromosome, while the high-level mupirocin-resistant isolate contained a copy of the gene on the plasmid. PCR amplification of genomic DNA from each mupirocin-resistant strain resulted in a 1.65-kb fragment, the predicted product from the intragenic mupA primers. This is the first report of a chromosomal location for the mupA gene conferring low-level mupirocin resistance. PMID:9124848 10. Strength of weak layers in cascading failures on multiplex networks: case of the international trade network. PubMed Lee, Kyu-Min; Goh, K-I 2016-01-01 Many real-world complex systems across natural, social, and economical domains consist of manifold layers to form multiplex networks. The multiple network layers give rise to nonlinear effect for the emergent dynamics of systems. Especially, weak layers that can potentially play significant role in amplifying the vulnerability of multiplex networks might be shadowed in the aggregated single-layer network framework which indiscriminately accumulates all layers. Here we present a simple model of cascading failure on multiplex networks of weight-heterogeneous layers. By simulating the model on the multiplex network of international trades, we found that the multiplex model produces more catastrophic cascading failures which are the result of emergent collective effect of coupling layers, rather than the simple sum thereof. 
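A toy load-shedding cascade of the general kind described above can be sketched as follows. The topology, loads, and capacities are invented for illustration, and this is a single-layer simplification, not the authors' multiplex model; in the multiplex setting a node failing in any layer is removed from all layers, which is how a weak layer can amplify the overall damage:

```python
# Toy cascade: a failed node sheds its load equally onto surviving
# neighbours; any neighbour pushed past its capacity fails in turn.
def cascade(adjacency, capacity, initial_load, seed):
    """Return the set of nodes that fail after `seed` fails."""
    load = dict(initial_load)
    failed = {seed}
    frontier = [seed]
    while frontier:
        node = frontier.pop()
        survivors = [m for m in adjacency[node] if m not in failed]
        if not survivors:
            continue  # nowhere to shed load; it is simply lost
        share = load[node] / len(survivors)
        load[node] = 0.0
        for m in survivors:
            load[m] += share
            if load[m] > capacity[m]:
                failed.add(m)
                frontier.append(m)
    return failed

# Hypothetical 4-node ring with little spare capacity: one failure
# overloads both neighbours and the whole network collapses.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
cap = {n: 1.4 for n in adj}
load = {n: 1.0 for n in adj}
print(sorted(cascade(adj, cap, load, 0)))  # [0, 1, 2, 3]
```

With more headroom (e.g. every capacity set to 2.0) the same seed failure is absorbed and no cascade occurs, which is the qualitative contrast the abstract draws between robust and fragile configurations.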
Therefore risks can be systematically underestimated in single-layer network analyses because the impact of weak layers can be overlooked. We anticipate that our simple theoretical study can contribute to further investigation and design of optimal risk-averse real-world complex systems. PMID:27211291 13. Strengths, weaknesses, opportunities and threats of the pig health monitoring systems used in England. PubMed Stärk, K D C; Nevel, A 2009-10-17 Several systems are being used in England to record information about the health of pigs. The British Pig Health Scheme (BPHS), the National Animal Disease Information System (NADIS), the Zoonoses Action Plan (ZAP) for Salmonella and the Veterinary Investigation Diagnosis Analysis (VIDA) system have been assessed to make recommendations for their future separate or joint development. The structure, organisation, processes, data quality, dissemination, utilisation and acceptance of each system have been assessed. Information was extracted from documents and websites, and informal interviews were conducted with technical experts and stakeholders. The systems covered a broad range of objectives, used variable approaches and operated at very different scales and budgets. There was a high level of awareness and involvement by the industry. 
Common weaknesses of the systems were the lack of in-depth quantitative analysis of the data, the lack of assessment of each system's impact, and the unknown level of bias as a result of the voluntary or selective participation in them. PMID:19850852 14. The Strengths and Weaknesses of Logic Formalisms to Support Mishap Analysis NASA Technical Reports Server (NTRS) Johnson, C. W.; Holloway, C. M. 2002-01-01 The increasing complexity of many safety critical systems poses new problems for mishap analysis. Techniques developed in the sixties and seventies cannot easily scale up to analyze incidents involving tightly integrated software and hardware components. Similarly, the realization that many failures have systemic causes has widened the scope of many mishap investigations. Organizations, including NASA and the NTSB, have responded by starting research and training initiatives to ensure that their personnel are well equipped to meet these challenges. One strand of research has identified a range of mathematically based techniques that can be used to reason about the causes of complex, adverse events. The proponents of these techniques have argued that they can be used to formally prove that certain events created the necessary and sufficient causes for a mishap to occur. Mathematical proofs can reduce the bias that is often perceived to affect the interpretation of adverse events. Others have opposed the introduction of these techniques by identifying social and political aspects to incident investigation that cannot easily be reconciled with a logic-based approach. Traditional theorem proving mechanisms cannot accurately capture the wealth of inductive, deductive and statistical forms of inference that investigators routinely use in their analysis of adverse events. This paper summarizes some of the benefits that logics provide, describes their weaknesses, and proposes a number of directions for future research. 15. 
Control of an outbreak of an epidemic methicillin-resistant Staphylococcus aureus also resistant to mupirocin. PubMed Irish, D; Eltringham, I; Teall, A; Pickett, H; Farelly, H; Reith, S; Woodford, N; Cookson, B 1998-05-01 An epidemic methicillin-resistant Staphylococcus aureus (EMRSA-3) appeared in a District hospital in June 1989 as part of a regional outbreak. The dynamics of the outbreak were complex and involved patient transfer between hospitals and wards. Control measures followed UK guidelines and included the use of nasal mupirocin. During these efforts a mupirocin-resistant MRSA [MuMRSA: mupirocin minimum inhibitory concentration (MIC) > 256 mg/L] emerged, probably in a patient who had been given eight mupirocin courses over nine months. The MuMRSA had a narrower phage-typing pattern than EMRSA-3, but was indistinguishable by pulsed-field gel electrophoresis of SmaI chromosomal restriction enzyme digests and its susceptibility pattern to other antibiotics. The results of in vitro curing and gene probing indicated that mupirocin resistance was encoded on a 48 Md plasmid. MuMRSA spread occurred in 12 patients and 11 staff. The affected patients were nursed on the same ward. The strain was eradicated from patients with oral ciprofloxacin and rifampicin, triclosan skin treatment and nasal fusidic acid and bacitracin cream. The control of the outbreak had significant medical, social and financial implications. Fortunately, there were alternative topical agents to mupirocin, an agent which has played such a key role in MRSA eradication in recent years. 16. A new medium containing mupirocin, acetic acid, and norfloxacin for the selective cultivation of bifidobacteria. PubMed Vlková, Eva; Salmonová, Hana; Bunešová, Věra; Geigerová, Martina; Rada, Vojtěch; Musilová, Šárka 2015-08-01 Various culture media have been proposed for the isolation and selective enumeration of bifidobacteria. Mupirocin is widely used as a selective factor along with glacial acetic acid. 
TOS (transgalactosylated oligosaccharides) medium supplemented with mupirocin is recommended by the International Dairy Federation for the detection of bifidobacteria in fermented milk products. Mupirocin media with acetic acid are also reliable for intestinal samples in which bifidobacteria predominate. However, for complex samples containing more diverse microbiota, the selectivity of mupirocin media is limited. Resistance to mupirocin has been demonstrated by many anaerobic bacteria, especially clostridia. The objective was to identify an antibiotic that inhibits the growth of clostridia and allows the growth of bifidobacteria, and to use the identified substance to develop a selective cultivation medium for bifidobacteria. The susceptibility of bifidobacteria and clostridia to 12 antibiotics was tested on agar using the disk diffusion method. Only norfloxacin inhibited the growth of clostridia and did not affect the growth of bifidobacteria. Using both pure cultures and faecal samples from infants, adults, calves, lambs, and piglets, the optimal concentration of norfloxacin in solid cultivation media was determined to be 200 mg/L. Our results showed that solid medium containing norfloxacin (200 mg/L) in combination with mupirocin (100 mg/L) and glacial acetic acid (1 mL/L) is suitable for the enumeration and isolation of bifidobacteria from faecal samples of different origins. PMID:25865525 18. Lessons from Dwarf8 on the Strengths and Weaknesses of Structured Association Mapping PubMed Central Larsson, Sara J.; Lipka, Alexander E.; Buckler, Edward S. 2013-01-01 The strengths of association mapping lie in its resolution and allelic richness, but spurious associations arising from historical relationships and selection patterns need to be accounted for in statistical analyses. 
Here we reanalyze one of the first generation structured association mapping studies of the Dwarf8 (d8) locus with flowering time in maize using the full range of new mapping populations, statistical approaches, and haplotype maps. Because this trait was highly correlated with population structure, we found that basic structured association methods overestimate phenotypic effects in the region, while mixed model approaches perform substantially better. Combined with analysis of the maize nested association mapping population (a multi-family crossing design), it is concluded that most, if not all, of the QTL effects at the general location of the d8 locus are from rare extended haplotypes that include other linked QTLs and that d8 is unlikely to be involved in controlling flowering time in maize. Previous independent studies have shown evidence for selection at the d8 locus. Based on the evidence of population bottleneck, selection patterns, and haplotype structure observed in the region, we suggest that multiple traits may be strongly correlated with population structure and that selection on these traits has influenced segregation patterns in the region. Overall, this study provides insight into how modern association and linkage mapping, combined with haplotype analysis, can produce results that are more robust. PMID:23437002 19. OECI accreditation of the European Institute of Oncology of Milan: strengths and weaknesses. PubMed Deriu, Pietro L; Basso, Silvia; Mastrilli, Fabrizio; Orecchia, Roberto 2015-01-01 The European Institute of Oncology began the process to reach the accreditation promoted by the Organisation of European Cancer Institutes (OECI) in 2012. This accreditation integrates the quality and safety path started in 2001 with accreditation by the Joint Commission International. Despite the presence of diversified accreditations and certifications and the clear need of time, effort, and commitment, the models are complementary. 
Each model is not to be considered as an end but as a tool for improvement: e.g., mixing accreditation standards led to an improvement in the quality and safety of processes. The present article details the OECI accreditation experience of the European Institute of Oncology, in particular the following strengths of OECI standards: collaboration among several involved parties (patient, volunteer, patient's general practitioner) in the clinical and quality/safety processes; a larger involvement of support personnel (psycho-oncologists, dieticians, physical therapists); and the development of clinical/translational research and innovation in prevention, diagnosis, and treatment to guarantee the best available practice in diagnosis and treatment. The OECI accreditation is specific to oncology and therefore its standards are tailored to a cancer center, both in terms of language used in the standards manual and in terms of patient needs. The OECI accreditation system puts an auditor team with a standards manual in charge of verifying quality and confirms the definition of IEO as a Comprehensive Cancer Center. PMID:27096268 20. Chlorhexidine and mupirocin susceptibilities in methicillin-resistant Staphylococcus aureus isolates from bacteraemia and nasal colonisation. PubMed Muñoz-Gallego, Irene; Infiesta, Lucia; Viedma, Esther; Perez-Montarelo, Dafne; Chaves, Fernando 2016-03-01 Chlorhexidine and mupirocin have been increasingly used in healthcare facilities to eradicate methicillin-resistant Staphylococcus aureus (MRSA) carriage. The aim of this study was to determine the prevalence and mechanisms of chlorhexidine and mupirocin resistance in MRSA from invasive infections and colonisation. MRSA isolates obtained from blood and nasal samples between 2012 and 2014 were analysed. Susceptibility to mupirocin was determined by disk diffusion and Etest and susceptibility to chlorhexidine by broth microdilution. 
The presence of mupA and qac (A/B and C) genes was investigated by PCR. Molecular typing was performed in high-level mupirocin-resistant (HLMR) isolates. Mupirocin resistance was identified in 15.6% of blood isolates (10.9% HLMR) and 15.1% of nasal isolates (12.0% HLMR). Presence of the mupA gene was confirmed in all HLMR isolates. For blood isolates, chlorhexidine minimum inhibitory concentrations (MICs) ranged from ≤0.125 to 4 mg/L and minimum bactericidal concentrations (MBCs) from ≤0.125 to 8 mg/L. In nasal isolates, chlorhexidine MICs and MBCs ranged from ≤0.125 to 2 mg/L. The qacA/B gene was detected in 2.2% of MRSA isolates (chlorhexidine MIC range 0.25-2 mg/L) and the qacC gene in 8.2% (chlorhexidine MIC range ≤0.125-1 mg/L). The prevalence of qacC was 18.9% in HLMR isolates and 3.6% in mupirocin-susceptible isolates (P=0.009). Most of the HLMR isolates (97.1%) belonged to the ST125 clone. These results suggest that chlorhexidine has a higher potential to prevent infections caused by MRSA. In contrast, mupirocin treatment should be used cautiously to avoid the spread of HLMR MRSA. PMID:27436397 1. Weak hydrogen bridges: a systematic theoretical study on the nature and strength of C--H...F--C interactions. PubMed Hyla-Kryspin, Isabella; Haufe, Günter; Grimme, Stefan 2004-07-19 We present a comparative study on the nature and strength of weak hydrogen bonding between the C(sp3)-H, C(sp2)-H, and C(sp)-H donor bonds and F-C(sp3) acceptors. The series of molecules CH3F.CH4 (2 a, 2 b), CH3F.C2H4 (3), CH3F.C2H2 (4), as well as model complexes of experimentally characterized 2-fluoro-2-phenylcyclopropane derivatives, C3H6.C3H5F (5 a, 5 b) and C3H5F.C3H5F (6) were investigated. Comparative studies were also performed for two conformers of the methane dimer (1 a, 1 b). 
The calculations were carried out in hierarchies of basis sets [SV(d,p), TZV(d,p), aug-TZV(d,p), TZV(2df,2pd), aug-TZV(2df,2pd), QZV(3d2fg,2pd), aug-QZV(3d2fg,2pdf)] by means of ab initio [HF, MP2, QCISD, QCISD(T)] methods and density functional theory (DFT/B3LYP, DFT/PBE). It is shown that well-balanced basis sets of at least TZV(2df,2pd) quality are needed for a proper description of the weakly bonded systems. In the case of 2, 3, 5, and 6, the dispersion interaction is the dominant term of the entire attraction, which is not accounted for at the B3LYP level. Significant electrostatic contributions are observed for 6 and 3. For 4, these forces have a dominating contribution to the hydrogen bonding. The C(sp)--H...F--C(sp3) interaction in 4, though weak, exhibits the same characteristics as conventional hydrogen bridges. Despite showing longer H⋯F/H contacts compared to 1 a, 2 a, and 5 a, the bifurcated structures 1 b, 2 b, and 5 b are characterized by larger dispersion interactions leading to stronger bonding. For the systems with only one H⋯F contact, the MP2/QZV(3d2fg,2pd) interaction energy increases in the order 2 a (-1.62 kJ mol^{-1}), 3 (-2.79 kJ mol^{-1}), 5 a (-5.97 kJ mol^{-1}), 4 (-7.25 kJ mol^{-1}), and 6 (-10.02 kJ mol^{-1}). This contradicts the estimated proton donor ability of the C--H bonds (2 a<5 a<3<6<4). 2. Mupirocin-mucin agar for selective enumeration of Bifidobacterium bifidum. PubMed Pechar, Radko; Rada, Vojtech; Parafati, Lucia; Musilova, Sarka; Bunesova, Vera; Vlkova, Eva; Killer, Jiri; Mrazek, Jakub; Kmet, Vladimir; Svejstil, Roman 2014-11-17 Bifidobacterium bifidum is a bacterial species exclusively found in the human intestinal tract. This species is becoming increasingly popular as a probiotic organism added to lyophilized products. In this study, porcine mucin was used as the sole carbon source for the selective enumeration of B. bifidum in probiotic food additives. Thirty-six bifidobacterial strains were cultivated in broth with mucin. 
Only 13 strains of B. bifidum utilized the mucin to produce acids. B. bifidum was selectively enumerated in eight probiotic food supplements using agar (MM agar) containing mupirocin (100 mg/L) and mucin (20 g/L) as the sole carbon source. MM agar was fully selective if the B. bifidum species was present together with Bifidobacterium animalis subsp. lactis, Bifidobacterium breve, and Bifidobacterium longum subsp. longum species and with lactic acid bacteria (lactobacilli, streptococci). Isolated strains of B. bifidum were identified using biochemical, PCR, and MALDI-TOF procedures and 16S rRNA gene sequencing. The novel selective medium was also suitable for the isolation of B. bifidum strains from human fecal samples. PMID:25217723 3. Patient experience with mupirocin or povidone-iodine nasal decolonization. PubMed Maslow, Jed; Hutzler, Lorraine; Cuff, Germaine; Rosenberg, Andrew; Phillips, Michael; Bosco, Joseph 2014-06-01 Led by the federal government, the payers of health care are enacting policies designed to base provider reimbursement on the quality of care they render. This study evaluated and compared patient experiences and satisfaction with nasal decolonization with either nasal povidone-iodine (PI) or nasal mupirocin ointment (MO). A total of 1903 patients were randomized to undergo preoperative nasal decolonization with either nasal MO or PI solution. All randomized patients were also given 2% chlorhexidine gluconate topical wipes. Patients were interviewed prior to discharge to assess adverse events and patient experience with their assigned preoperative antiseptic protocol. Of the 1903 randomized patients, 1679 (88.1%) were interviewed prior to discharge. Of patients receiving PI, 3.4% reported an unpleasant or very unpleasant experience, compared with 38.8% of those using nasal MO (P<.0001). 
Sixty-seven percent of patients using nasal MO believed it to be somewhat or very helpful in reducing surgical site infections, compared with 71% of patients receiving PI (P>.05). Being recruited as an active participant in surgical site infection prevention was a positive experience for 87.2% of MO patients and 86.3% of PI patients (P=.652). Those assigned to receive PI solution preoperatively reported significantly fewer adverse events than the nasal MO group (P<.01). Preoperative nasal decolonization with either nasal PI or MO was considered somewhat or very helpful by more than two-thirds of patients. PMID:24972440 5. Mupirocin-induced mutations in ileS in various genetic backgrounds of methicillin-resistant Staphylococcus aureus. PubMed Lee, Andie S; Gizard, Yann; Empel, Joanna; Bonetti, Eve-Julie; Harbarth, Stephan; François, Patrice 2014-10-01 Topical mupirocin is widely used for the decolonization of methicillin-resistant Staphylococcus aureus (MRSA) carriers. We evaluated the capacity of various MRSA clonotypes to develop mutations in the ileS gene associated with low-level mupirocin resistance. Twenty-four mupirocin-sensitive MRSA isolates from a variety of genotypes (determined by a multilocus variable-number tandem-repeat assay) were selected. Mupirocin MICs were determined by Etest. The isolates were then incubated in subinhibitory concentrations of mupirocin for 7 to 14 days. Repeat MIC determinations and sequencing of the ileS gene were then performed. Doubling times of isolates exposed to mupirocin and of unexposed isolates were compared. We found that exposure to mupirocin led to rapid induction of low-level resistance (MICs of 8 to 24 μg/ml) in 11 of 24 (46%) MRSA isolates. This phenomenon was observed in strains with diverse genetic backgrounds. Various mutations were detected in 18 of 24 (75%) MRSA isolates. Acquisition of mutations appeared to be a stepwise process during prolonged incubation with the drug. Among the five isolates exhibiting low-level resistance and the highest MICs, four tested sensitive after incubation in the absence of mupirocin but there was no reversion to the susceptible wild-type primary sequence. Resistance was not associated with significant fitness cost, suggesting that MRSA strains with low-level mupirocin resistance may have a selective advantage in facilities where mupirocin is commonly used. Our findings emphasize the importance of the judicious use of this topical agent and the need to closely monitor for the emergence of resistance. 6. 
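The MIC ranges quoted across these abstracts (low-level resistance from 8 μg/ml upward in the ileS study above, high-level above 256 mg/L in the outbreak report) can be collected into a small triage helper. A minimal sketch, assuming the cut-offs reported in these entries rather than any formal breakpoint standard; the function name and exact thresholds are purely illustrative:

```python
def classify_mupirocin_mic(mic_mg_per_l: float) -> str:
    """Bucket a mupirocin MIC using cut-offs quoted in the abstracts above.

    Thresholds are illustrative, taken from these entries (low-level from
    8 ug/ml upward, high-level above 256 mg/L), not a formal breakpoint table.
    """
    if mic_mg_per_l > 256:
        return "high-level resistant"  # mupA / plasmid-borne range in the outbreak report
    if mic_mg_per_l >= 8:
        return "low-level resistant"   # ileS point-mutation range (8-24 ug/ml)
    return "susceptible"


if __name__ == "__main__":
    for mic in (0.25, 16, 512):
        print(mic, classify_mupirocin_mic(mic))
```

Such a helper only mirrors the reporting convention used in these papers; actual clinical interpretation would follow a published breakpoint table.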
The Effect of Achievement Test Selection on Identification of Learning Disabilities within a Patterns of Strengths and Weaknesses Framework PubMed Central Miciak, Jeremy; Taylor, Pat; Denton, Carolyn A.; Fletcher, Jack M. 2014-01-01 Purpose Few empirical investigations have evaluated learning disabilities (LD) identification methods based on a pattern of cognitive strengths and weaknesses (PSW). This study investigated the reliability of LD classification decisions of the concordance/discordance method (C/DM) across different psychoeducational assessment batteries. Methods C/DM criteria were applied to assessment data from 177 second grade students based on two psychoeducational assessment batteries. The achievement tests were different, but were highly correlated and measured the same latent construct. Resulting LD identifications were then evaluated for agreement across batteries on LD status and the academic domain of eligibility. Results The two batteries identified a similar number of participants as having LD (80 and 74). However, indices of agreement for classification decisions were low (kappa = .29), especially for percent positive agreement (62%). The two batteries demonstrated agreement on the academic domain of eligibility for only 25 participants. Conclusions Cognitive discrepancy frameworks for LD identification are inherently unstable because of imperfect reliability and validity at the observed level. Methods premised on identifying a PSW profile may never achieve high reliability because of these underlying psychometric factors. An alternative is to directly assess academic skills to identify students in need of intervention. PMID:25243467 7. Use of mupirocin-chlorhexidine treatment to prevent Staphylococcus aureus surgical-site infections. PubMed Bertrand, X; Slekovec, C; Talon, D 2010-05-01 Evaluation of: Bode LGM, Kluytmans JAJW, Wertheim HFL et al.: Preventing surgical-site infections in nasal carriers of Staphylococcus aureus. N. Engl. J. Med. 
362, 9-17 (2010). Staphylococcus aureus is the main pathogen responsible for surgical-site infections and nasal carriage is a major risk factor for subsequent infection with this bacterium. Mupirocin is considered to be the topical antibacterial agent of choice for eradication of nasal S. aureus. The paper by Bode et al. provides strong evidence that the combination of a rapid identification of a S. aureus nasal carrier, mupirocin nasal ointment and chlorhexidine gluconate soap significantly reduces the rate of S. aureus surgical-site infection by nearly 60%. In conclusion, mupirocin nasal ointment use in S. aureus carriers before surgery has numerous advantages with few side effects. PMID:20441543 8. Impact of mupirocin resistance on the transmission and control of healthcare-associated MRSA PubMed Central Deeny, Sarah R.; Worby, Colin J.; Tosas Auguet, Olga; Cooper, Ben S.; Edgeworth, Jonathan; Cookson, Barry; Robotham, Julie V. 2015-01-01 Objectives The objectives of this study were to estimate the relative transmissibility of mupirocin-resistant (MupR) and mupirocin-susceptible (MupS) MRSA strains and evaluate the long-term impact of MupR on MRSA control policies. Methods Parameters describing MupR and MupS strains were estimated using Markov chain Monte Carlo methods applied to data from two London teaching hospitals. These estimates parameterized a model used to evaluate the long-term impact of MupR on three mupirocin usage policies: ‘clinical cases’, ‘screen and treat’ and ‘universal’. Strategies were assessed in terms of colonized and infected patient days and scenario and sensitivity analyses were performed. Results The transmission probability of a MupS strain was 2.16 (95% CI 1.38–2.94) times that of a MupR strain in the absence of mupirocin usage. 
The total prevalence of MupR in colonized and infected MRSA patients after 5 years of simulation was 9.1% (95% CI 8.7%–9.6%) with the ‘screen and treat’ mupirocin policy, increasing to 21.3% (95% CI 20.9%–21.7%) with ‘universal’ mupirocin use. The prevalence of MupR increased in 50%–75% of simulations with ‘universal’ usage and >10% of simulations with ‘screen and treat’ usage in scenarios where MupS had a higher transmission probability than MupR. Conclusions Our results provide evidence from a clinical setting of a fitness cost associated with MupR in MRSA strains. This provides a plausible explanation for the low levels of mupirocin resistance seen following ‘screen and treat’ mupirocin usage. From our simulations, even under conservative estimates of relative transmissibility, we see long-term increases in the prevalence of MupR given ‘universal’ use. PMID:26338047 9. An Examination of the Strengths, Weaknesses, Opportunities, and Threats Associated with the Adoption of Moodle[TM] by eXtension ERIC Educational Resources Information Center Hightower, Tayla Elise; Murphrey, Theresa Pesl; Coppernoll, Susanna Mumm; Jahedkar, Jennifer; Dooley, Kim E. 2011-01-01 The use of technology to deliver programming across Extension has been addressed widely; however, little research has been conducted concerning the use of Moodle[TM] as a course management system for Extension. The purpose of the study reported here was to identify the strengths, weaknesses, opportunities, and threats associated with the use of… 10. Special Education and Rehabilitation in Georgia: Strengths, Weakness, Opportunities, and Threats in a Newly-Independent State of the Former Soviet Union. 
ERIC Educational Resources Information Center Hobbs, Tim; Szydlowski, Steven; West, Daniel, Jr.; Germava, Otar 2002-01-01 Forty-nine Georgian professionals from the fields of health, education, and rehabilitation were brought together for a week-long workshop to discuss issues related to disability, rehabilitation, and special education. Workshop activities included a SWOT (strengths, weaknesses, opportunities, and threats) analysis of special education in Georgia.… 11. A Study of Strengths and Weaknesses of Descriptive Assessment from Principals, Teachers and Experts Points of View in Chaharmahal and Bakhteyari Primary Schools ERIC Educational Resources Information Center Sharief, Mostafa; Naderi, Mahin; Hiedari, Maryam Shoja; Roodbari, Omolbanin; Jalilvand, Mohammad Reza 2012-01-01 The aim of the current study is to determine the strengths and weaknesses of descriptive evaluation from the viewpoint of principals, teachers and experts of Chaharmahal and Bakhtiari province. A descriptive survey was performed. The statistical population includes 208 principals, 303 teachers, and 100 executive experts of descriptive evaluation scheme in… 12. Wildfire Prevention and Suppression plans enhancing: a first overview on strength and weakness in Italian stakeholders experiences and perception. NASA Astrophysics Data System (ADS) Bonora, Laura; Conese, Claudio; Barbati, Anna 2014-05-01 Fires and wildfires represent an element of vulnerability for forests, considering that they have now reached a level beyond which further burning would seriously endanger the ecosystem services and their sustainable management. It is fundamental to support fire-fighting Centres by giving them tools useful to face future trends; in this sense the first step is to examine technical and operative procedures to evaluate their strong and weak aspects, in collaboration with personnel responsible for risk management and suppression coordination, and patrols responsible for direct attack. 
The aim of this work is to identify present elements of strength and problematic aspects in tuning wildfire suppression actions to future changes; this is a crucial challenge both for policy makers and for territory planners and managers. Historical lines of investigation on forest fire covered the basic and fundamental dynamics whose understanding was necessary to confine and fight the wildfire phenomenon. At present, all the competences, knowledge and connections acquired are being translated and included in the Plans, sharing innovative strategies with the "directly involved actors" in an effort to decrease the fire trend. Stakeholders underlined that collaboration between research and territorial institutions is producing positive results, showing the conceptual soundness and smooth running of the in-progress implementations. The Italian framework of wildfire prevention plans is very peculiar because the Plans related to prevention and to active intervention procedures are coincident. Normative, procedural, economic and logistic aspects are considered and handled in the same general document; each year the local structures, designated by the Regions, are in charge of drafting the operative plan, defining and managing the distribution and turnover of means and patrols. In the present work 3 Italian Regions (Tuscany, Puglia and Sardinia, with different territorial and vegetation characteristics and affected by different 13. 
AERONET-OC: Strengths and Weaknesses of a Network for the Validation of Satellite Coastal Radiometric Products NASA Technical Reports Server (NTRS) Zibordi, Giuseppe; Holben, Brent; Slutsker, Ilya; Giles, David; D'Alimonte, Davide; Melin, Frederic; Berthon, Jean-Francois; Vandemark, Doug; Feng, Hui; Schuster, Gregory; Fabbri, Bryan E.; Kaitala, Seppo; Seppala, Jukka 2008-01-01 The Ocean Color component of the Aerosol Robotic Network (AERONET-OC) has been implemented to support long-term satellite ocean color investigations through cross-site consistent and accurate measurements collected by autonomous radiometer systems deployed on offshore fixed platforms. The ultimate purpose of AERONET-OC is the production of standardized measurements performed at different sites with identical measuring systems and protocols, calibrated using a single reference source and method, and processed with the same code. The AERONET-OC primary data product is the normalized water leaving radiance determined at center-wavelengths of interest for satellite ocean color applications, with an uncertainty lower than 5% in the blue-green spectral regions and higher than 8% in the red. Measurements collected at 6 sites counting the northern Adriatic Sea, the Baltic Proper, the Gulf of Finland, the Persian Gulf, and the northern and southern margins of the Middle Atlantic Bight, have shown the capability of producing quality assured data over a wide range of bio-optical conditions including Case-2 yellow substance- and sediment-dominated waters. This work briefly introduces network elements like: deployment sites, measurement method, instrument calibration, processing scheme, quality-assurance, uncertainties, data archive and products accessibility. Emphasis is given to those elements which underline the network strengths (i.e., mostly standardization of any network element) and its weaknesses (i.e., the use of consolidated, but old-fashioned technology). 
The work also addresses the application of AERONET-OC data to the validation of primary satellite radiometric products over a variety of complex coastal waters and finally provides elements for the identification of new deployment sites most suitable to support satellite ocean color missions. 14. Stability of mupirocin ointment (Bactroban) admixed with other proprietary dermatological products. PubMed Jagota, N K; Stewart, J T; Warren, F W; John, P M 1992-06-01 This study involved the mixing of 1:1 combinations of Bactroban (mupirocin) Ointment 2% with various cream, lotion, ointment, gel, solution and liquid soap formulations with storage at 37 degrees C for 60 days. The mixtures were assayed for mupirocin content at 0, 15, 30, 45 and 60 days using a high-pressure liquid chromatographic (HPLC) assay. At the time of preparation of these admixtures, Bactroban Ointment is chemically and physically compatible with all of the topical dermatological products studied except for Valisone lotion where a physical incompatibility is immediately observed. Admixtures of Hibiclens liquid soap or Lotrimin solution with Bactroban Ointment were stable throughout the entire 60-day study. Combinations of Lotrimin cream, Hytone cream, Valisone ointment or Vytone cream with Bactroban Ointment also retained chemical stability of mupirocin for the entire period even though two layers were observed and mixing was required to restore a physically homogeneous mixture. Other Bactroban Ointment admixtures were found to be chemically stable for mupirocin for periods of less than 60 days, or physically incompatible mixtures were observed upon storage. No conclusions were drawn from these studies concerning the efficacy or safety of any of these products when used in extemporaneously prepared combinations. 15. 
Strengths and weaknesses of weak-strong cluster problems: A detailed overview of state-of-the-art classical heuristics versus quantum approaches NASA Astrophysics Data System (ADS) Mandrà, Salvatore; Zhu, Zheng; Wang, Wenlong; Perdomo-Ortiz, Alejandro; Katzgraber, Helmut G. 2016-08-01 To date, a conclusive detection of quantum speedup remains elusive. Recently, a team from Google Inc. [V. S. Denchev et al., Phys. Rev. X 6, 031015 (2016), 10.1103/PhysRevX.6.031015] proposed a weak-strong cluster model tailored to have tall and narrow energy barriers separating local minima, with the aim to highlight the value of finite-range tunneling. More precisely, results from quantum Monte Carlo simulations as well as the D-Wave 2X quantum annealer scale considerably better than state-of-the-art simulated annealing simulations. Moreover, the D-Wave 2X quantum annealer is ~10^8 times faster than simulated annealing on conventional computer hardware for problems with approximately 10^3 variables. Here, an overview of different sequential, nontailored, as well as specialized tailored algorithms on the Google instances is given. We show that the quantum speedup is limited to sequential approaches and study the typical complexity of the benchmark problems using insights from the study of spin glasses. 16. Correlation of mupirocin resistance with biofilm production in methicillin-resistant Staphylococcus aureus from surgical site infections in a tertiary centre, Egypt. PubMed Barakat, Ghada I; Nabil, Yasmin M 2016-03-01 The aim of this study was to detect mupirocin-resistant isolates from pus/wound swabs taken postoperatively in a tertiary centre in Egypt and to determine their ability to form biofilm in order to establish its correlation with mupirocin resistance. This was a prospective study including 513 pus/wound swabs from patients suffering from postoperative surgical site infections over the period July 2013-January 2015. 
Samples were cultured and isolates were identified by coagulase activity, DNase test, mannitol fermentation by mannitol salt agar followed by API Staph 32. Oxacillin agar screen test, agar dilution test for mupirocin, and mupA gene detection by PCR were performed for all methicillin-resistant Staphylococcus aureus (MRSA) isolates. Biofilm detection was carried out by the microtitre plate and Congo red agar methods. Of the 161 S. aureus isolates identified, 73 (45.3%) were MRSA, among which 82.2% were mupirocin-susceptible and 17.8% were mupirocin-resistant. Among the resistant isolates, 38.5% showed low-level resistance and 61.5% were high-level mupirocin-resistant. The mupA gene was detected in 75.0% of high-level mupirocin-resistant strains and in none of the low-level mupirocin-resistant strains. Among the mupirocin-susceptible isolates, 95.0% were biofilm-producers and 5.0% did not produce biofilm. All mupirocin-resistant isolates produced biofilm. Moreover, 15.3% of high-level mupirocin-resistant strains were negative for the mupA gene but showed evidence of biofilm formation. In conclusion, biofilm formation may be suggested to play a role in mupirocin resistance besides the presence of a genetic element encoding abnormal isoleucyl-tRNA synthetase; however, further studies are needed to confirm these findings. PMID:27436387 18. Gamow-Teller strength distributions and stellar weak-interaction rates for ^{76}Ge and ^{82}Se using the deformed pn-QRPA model NASA Astrophysics Data System (ADS) Nabi, Jameel-Un; Ishfaq, Mavra 2016-07-01 We calculate Gamow-Teller strength distributions for ββ-decay nuclei ^{76}Ge and ^{82}Se using the deformed pn-QRPA model. We use a deformed Nilsson basis and consider pairing correlations within the deformed BCS theory. Ground state correlations and two-particle and two-hole mixing states were included in our pn-QRPA model. Our calculated strength distributions were compared with experimental data and previous calculations. 
The total Gamow-Teller strength and centroid placement calculated in our model compare well with the measured values. We calculate β-decay and positron capture rates on ^{76}Ge and ^{82}Se in supernova environments and compare them to those obtained from experimental data and previous calculations. Our study shows that positron capture rates dominate the total weak rates at high stellar temperatures. We also calculate the energy rates of β-delayed neutrons and their emission probabilities. 19. Structural and spectroscopic characterization and Hirshfeld surface analysis of major component of antibiotic mupirocin - pseudomonic acid A NASA Astrophysics Data System (ADS) Bojarska, J.; Maniukiewicz, W.; Fruziński, A.; Jędrzejczyk, M.; Wojciechowski, J.; Krawczyk, H. 2014-11-01 The crystal structure of pseudomonic acid A, the major component of the antibiotic mupirocin, was determined from single-crystal X-ray diffraction data at low temperature (100 K). The compound crystallizes in the monoclinic system with the non-centrosymmetric space group P21, with unit cell dimensions a = 12.4844(5), b = 5.0313(2), c = 21.5251(9) Å and β = 101.730(2)°, Z = 2. The molecules associate in dimers in a head-to-tail motif through strong O-H⋯O hydrogen bonds, packed in a parallel arrangement along the crystallographic b axis. Additionally, relatively weak C-H⋯O and C-H⋯π interactions form a 3-D hydrogen bond framework. From the Hirshfeld surface and 2-D fingerprint analysis it was found that subtle interactions such as H⋯H, which account for two-thirds of all intermolecular contacts, provide extra stabilization in addition to the above-mentioned strong hydrogen bonds. The electrostatic potential mapped over the Hirshfeld surface visualizes electrostatic complementarities in the crystal packing. Results of X-ray diffraction and Monte Carlo methods reveal two conformations of the n-alkyl chain of pseudomonic acid A: extended in the single crystal and folded in the liquid state.
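The reported monoclinic cell constants determine the unit cell volume directly; the following one-line check is a sketch, where the cell parameters are those quoted above and the resulting volume is computed here rather than quoted from the paper:

```python
from math import sin, radians

# Reported monoclinic cell parameters for pseudomonic acid A:
# lengths in angstroms, beta in degrees, Z = 2 molecules per cell.
a, b, c, beta = 12.4844, 5.0313, 21.5251, 101.730

# For a monoclinic cell (alpha = gamma = 90 deg): V = a * b * c * sin(beta)
V = a * b * c * sin(radians(beta))
print(f"V = {V:.1f} A^3, V/Z = {V / 2:.1f} A^3 per molecule")  # V is about 1323.8 A^3
```

Dividing by Z = 2 gives the volume per molecule, a common sanity check on a refined structure.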
A detailed interpretation of the FT-IR and NMR spectra was also reported. The TG and DTG results indicated that pseudomonic acid A is stable up to 210 °C. 20. Supraspinatus and infraspinatus weakness in overhead athletes with scapular dyskinesis: strength assessment before and after restoration of scapular musculature balance. PubMed Merolla, Giovanni; De Santis, Elisa; Campi, Fabrizio; Paladini, Paolo; Porcellini, Giuseppe 2010-12-01 A disturbance in scapulohumeral rhythm may cause negative biomechanical effects on the rotator cuff (RC). Alteration in scapular motion and shoulder pain can influence RC strength. The purpose of this study was to assess supraspinatus and infraspinatus strength in 29 overhead athletes with scapular dyskinesis, before and after 3 and 6 months of rehabilitation aimed at restoring scapular musculature balance. Passive posterior soft tissue stretching was prescribed to balance shoulder mobility. Scapular dyskinesis patterns were evaluated according to Kibler et al. Clinical assessment was performed with the empty can (EC) test and the infraspinatus strength test (IST). Strength values were recorded by a dynamometer; scores for pain were assessed with the VAS scale. Changes in shoulder internal rotation (IR) were measured. The force values increased at 3 months (P < 0.01) and at 6 months (P < 0.01). Changes in glenohumeral IR and decreases in pain scores were found at both follow-ups. The outcomes for pain and strength confirm the role of a proper scapular position in an optimal length-tension relationship of the RC muscles. These data should encourage those caring for athletes to consider restoration of scapular musculature balance an essential part of athletic training. PMID:21069487 1.
Molecular Characterization of High-Level Mupirocin Resistance in Staphylococcus pseudintermedius PubMed Central Pérez-Roth, Eduardo; Pintarić, Selma; Šeol Martinec, Branka 2013-01-01 The genetic analysis of high-level mupirocin resistance (Hi-Mupr) in a Staphylococcus pseudintermedius isolate from a dog is presented. The Hi-Mupr ileS2 gene, flanked by a novel rearrangement of directly repeated insertion sequence IS257 elements, was located, together with the aminoglycoside resistance aacA-aphD determinant, on a conjugative plasmid related to the pSK41/pGO1 family plasmids. PMID:23269741 2. Efficacy of mupirocin and rifampin used with standard treatment in the management of acne vulgaris. PubMed Khorvash, Farzin; Abdi, Fatemeh; H Kashani, Hessam; Fatemi Naeini, Farahnaz; Khorvash, Fariborz 2013-01-01 The multiple etiologic factors involved in acne make the use of various medications necessary to treat the condition. This study aimed to determine the efficacy of mupirocin and rifampin used with standard treatment in the management of acne vulgaris. In a multicentre, randomized controlled, triple-blinded study, a total of 105 acne patients with a clinical diagnosis of moderate to severe acne were randomly divided into three groups (35 per group) for treatment of acne. The first group was treated with standard treatment alone, the second group received mupirocin plus standard treatment, and the third group received rifampin plus standard treatment. There were three study visits according to the Global Acne Grading System (GAGS): at baseline and at weeks 6 and 12. The absolute changes in GAGS score from baseline to weeks 6 and 12 demonstrated a reduction in the mean GAGS score in all three treatment groups (p < 0.001). Because of differences in GAGS scores at baseline, the data were adjusted using a general linear model. The findings showed that all of the treatments significantly improved acne lesions.
Nevertheless, none of the treatments was shown to be more effective than the others (p = 0.9). The three treatments were well tolerated, and no serious adverse events were reported. These findings provide evidence on the efficacy of combining mupirocin and rifampin with standard treatment in the management of acne vulgaris, although none of the treatments had superior efficacy compared with the others. PMID:24250593 3. Comparison of mupirocin-based media for selective enumeration of bifidobacteria in probiotic supplements. PubMed Bunesova, Vera; Musilova, Sarka; Geigerova, Martina; Pechar, Radko; Rada, Vojtech 2015-02-01 An international standard already exists for the selective enumeration of bifidobacteria in milk products. This standard uses transgalactosylated oligosaccharide (TOS) propionate agar supplemented with mupirocin. However, no such standard method has been described for the selective enumeration of bifidobacteria in probiotic supplements, where the presence of bifidobacteria is much more variable than in milk products. Therefore, we enumerated bifidobacteria by the colony count technique in 13 probiotic supplements using three media supplemented with mupirocin (Mup; 100 mg/l): TOS, Bifidobacteria selective medium (BSM) and modified Wilkins-Chalgren anaerobe agar with soya peptone (WSP). Moreover, the growth of bifidobacterial strains often used in probiotic products was assessed in these media. All 13 products contained members of the genus Bifidobacterium, and the tested mupirocin media were found to be fully selective for bifidobacteria. However, the type strain Bifidobacterium bifidum DSM 20456 and collection strain B. bifidum DSM 20239 showed statistically significantly lower counts on TOS Mup medium compared to BSM Mup and WSP Mup media. Therefore, the TOS Mup medium recommended by the ISO standard cannot be regarded as a fully selective and suitable medium for the genus Bifidobacterium.
In contrast, the BSM Mup and WSP Mup media supported the growth of all bifidobacterial species. 5.
Genotypic and phenotypic characterization of methicillin-resistant Staphylococcus aureus (MRSA) clones with high-level mupirocin resistance. PubMed González-Domínguez, María; Seral, Cristina; Potel, Carmen; Sáenz, Yolanda; Álvarez, Maximiliano; Torres, Carmen; Castillo, Francisco Javier 2016-06-01 A high proportion of the methicillin-resistant Staphylococcus aureus isolates recovered in a one-year period in our environment showed high-level mupirocin resistance (HLMUPR-MRSA) (27.2%). HLMUPR-MRSA isolates were mainly collected from skin and soft tissue samples, and diabetes was the main related comorbidity. These isolates were most frequently found in vascular surgery. HLMUPR-MRSA isolates were more resistant to aminoglycosides than mupirocin-susceptible MRSA, linked to the presence of bifunctional and/or nucleotidyltransferase enzymes, with or without macrolide resistance associated with the msr(A) gene. Most HLMUPR-MRSA isolates belonged to ST125/t067. Nine IS257-ileS2 amplification patterns (p3 being the most frequent) were observed in HLMUPR-MRSA isolates, suggesting the presence of several mupirocin-resistance-carrying plasmids in our environment and promoting the emergence of mupirocin resistance. The presence of the same IS257-ileS2 amplification pattern, p3, in 65% of HLMUPR-MRSA isolates, all of them ST125/t067, suggests a clonal spread in our hospital and community environment, which could explain the high prevalence of HLMUPR-MRSA during the study period. An outbreak situation or an increase in mupirocin consumption was not observed. 6. Analysis of rIleRS, an isoleucyl-tRNA synthetase gene associated with mupirocin production by Pseudomonas fluorescens NCIMB 10586. PubMed Rangaswamy, Vidhya; Hernández-Guzmán, Gustavo; Shufran, Kevin A; Bender, Carol L 2002-12-01 Some strains of Pseudomonas fluorescens produce the antibiotic mupirocin, which functions as a competitive inhibitor of isoleucyl-tRNA synthetase (IleRS). Mupirocin-producing strains of P.
fluorescens must overcome the inhibitory effects of the antibiotic to avoid self-poisoning. However, it is not clear how P. fluorescens protects itself from the toxic effects of mupirocin. In this report, we describe a second gene encoding isoleucyl-tRNA synthetase (rIleRS) in P. fluorescens that is associated with the mupirocin biosynthetic gene cluster. Random mutagenesis of the mupirocin-producing strain, P. fluorescens 10586, resulted in a mupirocin-defective mutant disrupted in a region with similarity to IleRS, the target site for mupirocin. The IleRS gene described in the present study was sequenced and shown to be encoded by a 3093 bp ORF, which is 264 bp larger than the IleRS gene previously identified in P. fluorescens 10586. rIleRS from P. fluorescens is most closely related to prokaryotic or eukaryotic sources of IleRS that are resistant to mupirocin. Interestingly, the relatedness between rIleRS and the IleRS previously described in P. fluorescens 10586 was low (24% similarity), which indicates that P. fluorescens contains two isoforms of isoleucyl-tRNA synthetase. 7. Strength and weaknesses of modeling the dynamics of mode-locked lasers by means of collective coordinates NASA Astrophysics Data System (ADS) Alsaleh, M.; Mback, C. B. L.; Tchomgo Felenou, E.; Tchofo Dinda, P.; Grelu, Ph; Porsezian, K. 2016-07-01 We address the efficiency of theoretical tools used in the development and optimization of mode-locked fiber lasers. Our discussion is based on the practical case of modeling the dynamics of a dispersion-managed fiber laser. One conventional approach uses discrete propagation equations, followed by analysis of the numerical results through a collective coordinate projection. We compare the latter with our dynamical collective coordinate approach (DCCA), which combines both modeling and analysis in a compact form.
We show that for single-pulse dynamics, the DCCA allows a much quicker solution mapping in the space of cavity parameters than the conventional approach, along with good accuracy. We also discuss the weaknesses of the DCCA, in particular when multiple pulsing bifurcations occur. 8. Time-resolved carrier dynamics and electron-phonon coupling strength in proximized weak ferromagnet-superconductor nanobilayers NASA Astrophysics Data System (ADS) Taneda, T.; Pepe, G. P.; Parlato, L.; Golubov, A. A.; Sobolewski, Roman 2007-05-01 We present our femtosecond optical pump-probe studies of proximized ferromagnet-superconductor nanobilayers. The weak ferromagnetic nature of a thin NiCu film makes it possible to observe the dynamics of the nonequilibrium carriers through near-surface optical reflectivity change measurements. The subpicosecond biexponential reflectivity decay has been identified with the electron-phonon Debye and acoustic phonon relaxation times, and the temperature dependence of the Debye phonon decay was used to evaluate the electron-phonon coupling constants for both pure Nb and proximized Nb/NiCu heterostructures down to low temperatures. We have also demonstrated that the NiCu overlay on top of Nb dramatically reduced the slow, bolometric component of the photoresponse, making such bilayers attractive for future radiation detector applications. 9. SWOT analysis of Banff: strengths, weaknesses, opportunities and threats of the international Banff consensus process and classification system for renal allograft pathology. PubMed Mengel, M; Sis, B; Halloran, P F 2007-10-01 The Banff process defined the diagnostic histologic lesions for renal allograft rejection and created a standardized classification system where none had existed. By correcting this deficit the process had universal impact on clinical practice and on clinical and basic research.
All trials of new drugs since the early 1990s benefited, because the Banff classification of lesions permitted the end point of biopsy-proven rejection. The Banff process has strengths, weaknesses, opportunities and threats (SWOT). The strength is its self-organizing group structure to create consensus. Consensus does not mean correctness: defining consensus is essential if a widely held view is to be proved wrong. The weaknesses of the Banff process are the absence of an independent external standard to test the classification, and its almost exclusive reliance on histopathology, which has inherent limitations in intra- and interobserver reproducibility, particularly at the interface between borderline and rejection, which is exactly where clinicians demand precision. The opportunity lies in new technology such as transcriptomics, which can form an external standard and can be incorporated into a new classification combining the elegance of histopathology and the objectivity of transcriptomics. The threat is the degree to which the renal transplant community will participate in and support this process. PMID:17848174 10. Studies of a weak polyampholyte at the air-buffer interface: The effect of varying pH and ionic strength NASA Astrophysics Data System (ADS) Cicuta, Pietro; Hopkinson, Ian 2001-05-01 We have carried out experiments to probe the static and dynamic interfacial properties of β-casein monolayers spread at the air-buffer interface, and analyzed these results in the context of models of weak polyampholytes. Measurements have been made systematically over a wide range of ionic strength and pH. In the semidilute regime of surface concentration a scaling exponent is found that can be linked to the degree of chain swelling. This shows that at pH close to the isoelectric point the protein is compact, while at pH values away from the isoelectric point the protein is extended. The transition between compact and extended states is continuous.
As a function of increasing ionic strength, we observe swelling of the protein at the isoelectric pH but contraction of the protein at pH values away from it. These behaviors are typical of those predicted theoretically for a weak polyampholyte. Dilational modulus measurements, made as a function of surface concentration, exhibit maxima that are linked to the collapse of hydrophilic regions of the protein into the subphase. Based on these data we present a map of the protein configuration in the monolayer. These findings are supported by strain (surface pressure) relaxation measurements and surface quasielastic light scattering measurements, which suggest the existence of loops and tails in the subphase at higher surface concentrations. 12. The Reliability and Validity of the English and Spanish Strengths and Weaknesses of ADHD and Normal Behavior Rating Scales in a Preschool Sample: Continuum Measures of Hyperactivity and Inattention ERIC Educational Resources Information Center Lakes, Kimberley D.; Swanson, James M.; Riggs, Matt 2012-01-01 Objective: To evaluate the reliability and validity of the English and Spanish versions of the Strengths and Weaknesses of ADHD-symptom and Normal-behavior (SWAN) rating scale. Method: Parents of preschoolers completed both a SWAN and the well-established Strengths and Difficulties Questionnaire (SDQ) on two separate occasions over a span of 3… 13. Multiplex PCR assay for identification of six different Staphylococcus spp. and simultaneous detection of methicillin and mupirocin resistance. PubMed Campos-Peña, E; Martín-Nuñez, E; Pulido-Reyes, G; Martín-Padrón, J; Caro-Carrillo, E; Donate-Correa, J; Lorenzo-Castrillejo, I; Alcoba-Flórez, J; Machín, F; Méndez-Alvarez, S 2014-07-01 We describe a new, efficient, sensitive, and fast single-tube multiplex-PCR protocol for the identification of the most clinically significant Staphylococcus spp. and the simultaneous detection of the methicillin and mupirocin resistance loci.
The protocol identifies at the species level isolates belonging to S. aureus, S. epidermidis, S. haemolyticus, S. hominis, S. lugdunensis, and S. saprophyticus. 15. Spatial Noise in Coupling Strength and Natural Frequency within a Pacemaker Network; Consequences for Development of Intestinal Motor Patterns According to a Weakly Coupled Phase Oscillator Model. PubMed Parsons, Sean P; Huizinga, Jan D 2016-01-01 Pacemaker activities generated by networks of interstitial cells of Cajal (ICC), in conjunction with the enteric nervous system, orchestrate most motor patterns in the gastrointestinal tract. It was our objective to understand the role of network features of ICC associated with the myenteric plexus (ICC-MP) in the shaping of motor patterns of the small intestine. To that end, a model of weakly coupled oscillators (oscillators influence each other's phase but not amplitude) was created, with most parameters derived from experimental data. The ICC network is a uniform two-dimensional network coupled by gap junctions. All ICC generate pacemaker (slow wave) activity, with a frequency gradient in mice from 50/min at the proximal end of the intestine to 40/min at the distal end.
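The weakly coupled phase oscillator model just described lends itself to a compact numerical sketch. The following is a hypothetical illustration, not the authors' code: the chain length, conductance value, Euler time step, and the long-tailed "gap junction density" distribution are invented stand-ins for the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 60                                   # oscillators along the chain (illustrative)
omega = np.linspace(50.0, 40.0, N) / 60  # intrinsic frequencies in Hz: 50/min proximal -> 40/min distal
density = rng.pareto(1.5, N - 1) + 0.1   # long-tailed spatial noise in "gap junction density"
conductance = 0.05                       # per-junction conductance (illustrative units)
K = conductance * density                # coupling strength = density x conductance

dt, T = 0.01, 300.0                      # Euler step and total simulated time, seconds
theta = rng.uniform(0.0, 2 * np.pi, N)  # phases only; amplitude is not modeled (weak coupling)
start = theta.copy()

for _ in range(int(T / dt)):
    step = 2 * np.pi * omega * dt             # free-running phase advance
    pull = np.sin(theta[:-1] - theta[1:])     # phase pull within each neighbor pair
    step[1:] += K * pull * dt                 # distal neighbor pulled toward proximal
    step[:-1] -= K * pull * dt                # and vice versa
    theta += step

# Mean observed frequency of each oscillator over the run, in cycles per minute.
# Frequency plateaus separated by steps tend to form at links where K is small.
observed = (theta - start) / (2 * np.pi * T) * 60
```

Lowering `conductance` weakens every link uniformly, which in this toy model tends to break the chain into more frequency plateaus, qualitatively matching the carbenoxolone effect described in the abstract.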
Key features of motor patterns, directly related to the underlying pacemaker activity, are frequency steps and dislocations. These were accurately mimicked by reduction of coupling strength at a point in the chain of oscillators. When coupling strength was expressed as a product of gap junction density and conductance, and gap junction density was varied randomly along the chain (i.e., spatial noise) with a long-tailed distribution, plateau steps occurred at points of low density. As gap junction conductance was decreased, the number of plateaus increased, mimicking the effect of the gap junction inhibitor carbenoxolone. When spatial noise was added to the natural interval gradient, as gap junction conductance decreased, the number of plateaus increased as before, but in addition the phase waves frequently changed direction of apparent propagation, again mimicking the effect of carbenoxolone. In summary, key features of the motor patterns that are governed by pacemaker activity may be a direct consequence of biological noise, specifically spatial noise in gap junction coupling and pacemaker frequency. PMID:26869875 17. Predicted and measured concentrations of pharmaceuticals in hospital effluents. Examination of the strengths and weaknesses of the two approaches through the analysis of a case study. PubMed Verlicchi, Paola; Zambello, Elena 2016-09-15 This study deals with the chemical characterization of hospital effluents in terms of the predicted and measured concentrations of 38 pharmaceuticals belonging to 11 different therapeutic classes. The paper outlines the strengths and weaknesses of the two approaches through an analysis of a case study referring to a large hospital.
It highlights the observed (and expected) ranges of variability for the parameters of the adopted model, presents the results of an uncertainty analysis of direct measurements (due to sampling mode and frequency and chemical analysis) and a sensitivity analysis of predicted concentrations (based on the annual consumption of pharmaceuticals, their excretion rate and the annual wastewater volume generated by the hospital). Measured concentrations refer to two sampling campaigns carried out in summer and winter in order to investigate seasonal variability of the selected compounds. Predicted concentrations are compared to measured ones in three scenarios: summer, winter and the whole year. It was found that predicted and measured concentrations are in agreement for a limited number of compounds (namely atenolol, atorvastatin and hydrochlorothiazide), and for most compounds the adoption of the model leads to a large overestimation in all three periods. Uncertainties in predictions are mainly due to the wastewater volume and excretion factor, whereas for measured concentrations uncertainties are mainly due to sampling mode. PMID:27161130 18. Comparative study of mupirocin and oral co-trimoxazole plus topical fusidic acid in eradication of nasal carriage of methicillin-resistant Staphylococcus aureus. PubMed Central Parras, F; Guerrero, M C; Bouza, E; Blázquez, M J; Moreno, S; Menarguez, M C; Cercenado, E 1995-01-01 Mupirocin is a topically applied drug that is very active in the eradication of nasal carriage of methicillin-resistant Staphylococcus aureus (MRSA). However, studies designed to compare mupirocin treatment with other antimicrobial regimens are lacking. We therefore conducted an open, prospective, randomized, controlled trial to compare the efficacy and safety of mupirocin with those of oral co-trimoxazole plus topical fusidic acid (both regimens with a chlorhexidine scrub bath) for the eradication of MRSA from nasal and extranasal carriers of MRSA.
The eradication rates with mupirocin and co-trimoxazole plus fusidic acid at 2, 7, 14, 21, 28, and 90 days were 93 and 93, 100 and 100, 97 and 94, 100 and 92, 96 and 95, and 78 and 71%, respectively, for nasal carriage. At 7, 14, and 28 days the eradication rates for extranasal carriage by the two regimens were 23 and 74, 83 and 76, and 45 and 69%, respectively. The efficacy and safety of both regimens were similar. The MRSA isolates were not resistant to the study drugs either at baseline or at follow-up. These results suggest that mupirocin and co-trimoxazole plus fusidic acid, both used in conjunction with a chlorhexidine soap bath, are equally effective and safe for the eradication of MRSA from nasal and extranasal MRSA carriers. Mupirocin was easier to use but more expensive. PMID:7695302 19. Targeted Intranasal Mupirocin To Prevent Colonization and Infection by Community-Associated Methicillin-Resistant Staphylococcus aureus Strains in Soldiers: a Cluster Randomized Controlled Trial PubMed Central Ellis, Michael W.; Griffith, Matthew E.; Dooley, David P.; McLean, Joseph C.; Jorgensen, James H.; Patterson, Jan E.; Davis, Kepler A.; Hawley, Joshua S.; Regules, Jason A.; Rivard, Robert G.; Gray, Paula J.; Ceremuga, Julia M.; DeJoseph, Mary A.; Hospenthal, Duane R. 2007-01-01 Community-associated methicillin-resistant Staphylococcus aureus (CA-MRSA) is an emerging pathogen that primarily manifests as uncomplicated skin and soft tissue infections. We conducted a cluster randomized, double-blind, placebo-controlled trial to determine whether targeted intranasal mupirocin therapy in CA-MRSA-colonized soldiers could prevent infection in the treated individuals and prevent new colonization and infection within their study groups. We screened 3,447 soldiers comprising 14 training classes for CA-MRSA colonization from January to December 2005.
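The trial's headline comparisons are risk differences with 95% confidence intervals. The arithmetic can be sketched as follows, using the colonized-participant infection counts this trial reports (5/65 placebo vs. 7/66 mupirocin); note this is a simple Wald interval, so the point estimate matches the abstract but the interval need not, since the authors' CI method is not stated here:

```python
from math import sqrt

def risk_difference(x1, n1, x2, n2, z=1.96):
    """Risk difference p1 - p2 with a normal-approximation (Wald) 95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d, d - z * se, d + z * se

# Placebo 5/65 vs. mupirocin 7/66 infections among colonized participants:
d, lo, hi = risk_difference(5, 65, 7, 66)
print(f"difference = {d:+.1%}, 95% CI ({lo:+.1%} to {hi:+.1%})")  # point estimate -2.9%
```

An interval straddling zero, as here, is what the trial reads as no demonstrable benefit of mupirocin over placebo.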
Each training class was randomized to either the mupirocin or placebo study group, and the participants identified as CA-MRSA colonized were treated with either mupirocin or placebo. All participants underwent repeat screening after 8 to 10 weeks and were monitored for 16 weeks for development of infection. Of 3,447 participants screened, 134 (3.9%) were initially colonized with CA-MRSA. Five of 65 (7.7%; 95% CI, 4.0% to 11.4%) placebo-treated participants and 7 of 66 (10.6%; 95% CI, 7.9% to 13.3%) mupirocin-treated participants developed infections; the difference in the infection rates of the placebo- and mupirocin-treated groups was −2.9% (95% CI, −7.5% to 1.7%). Of those not initially colonized with CA-MRSA, 63 of 1,459 (4.3%; 95% CI, 2.7% to 5.9%) of the placebo group and 56 of 1,607 (3.5%; 95% CI, 2.6% to 5.2%) of the mupirocin group developed infections; the difference in the infection rates of the placebo and mupirocin groups was 0.8% (95% CI, −1.0% to 2.7%). Of 3,447 participants, 3,066 (89%) were available for the second sampling and completed follow-up. New CA-MRSA colonization occurred in 24 of 1,459 (1.6%; 95% CI, 0.05% to 2.8%) of the placebo group participants and 23 of 1,607 (1.4%; 95% CI, 0.05% to 2.3%) of the mupirocin group participants; the difference in the colonization rates of the placebo and mupirocin groups was 0.2% (95% CI, −1.3% to 1.7%). Despite CA 20. Quorum-sensing-dependent regulation of biosynthesis of the polyketide antibiotic mupirocin in Pseudomonas fluorescens NCIMB 10586. PubMed El-Sayed, A K; Hothersall, J; Thomas, C M 2001-08-01 Mupirocin (pseudomonic acid) is a polyketide antibiotic targeting isoleucyl-tRNA synthetase, produced by Pseudomonas fluorescens NCIMB 10586. It is used clinically as a topical treatment for staphylococcal infections, particularly in contexts where there is a problem with methicillin-resistant Staphylococcus aureus (MRSA).
In studying the mupirocin biosynthetic cluster the authors identified two putative regulatory genes, mupR and mupI, whose predicted amino acid sequences showed significant identity to proteins involved in quorum-sensing-dependent regulatory systems such as LasR/LuxR (transcriptional activators) and LasI/LuxI (synthases for N-acylhomoserine lactones--AHLs--that activate LasR/LuxR). Inactivation by deletion mutations using a suicide vector strategy confirmed the requirement for both genes in mupirocin biosynthesis. Cross-feeding experiments between bacterial strains as well as solvent extraction showed that, as predicted, wild-type P. fluorescens NCIMB 10586 produces a diffusible substance that overcomes the defect of a mupI mutant. Use of biosensor strains showed that the MupI product can activate the Pseudomonas aeruginosa lasR/lasI system and that P. aeruginosa produces one or more compounds that can replace the MupI product. Insertion of a xylE reporter gene into mupA, the first ORF of the mupirocin biosynthetic operon, showed that together mupR/mupI control expression of the operon in such a way that the cluster is switched on late in exponential phase and in stationary phase. 1. A Novel Chimeric Lysin Shows Superiority to Mupirocin for Skin Decolonization of Methicillin-Resistant and -Sensitive Staphylococcus aureus Strains PubMed Central Pastagia, Mina; Euler, Chad; Chahales, Peter; Fuentes-Duculan, Judilyn; Krueger, James G.; Fischetti, Vincent A. 2011-01-01 Staphylococcus aureus is a major human pathogen responsible for a number of serious and sometimes fatal infections. One of its reservoirs on the human body is the skin, which is known to be a source of invasive infection. The potential for an engineered staphylococcus-specific phage lysin (ClyS) to be used for topical decolonization is presented. We formulated ClyS into an ointment and applied it to a mouse model of skin colonization/infection with S. aureus.
Unlike the standard topical antibacterial agent mupirocin, ClyS eradicated a significantly greater number of methicillin-susceptible S. aureus (MSSA) and -resistant S. aureus (MRSA) bacteria: a 3-log reduction with ClyS as opposed to a 2-log reduction with mupirocin in our model. The use of ClyS also demonstrated a decreased potential for the development of resistance by MRSA and MSSA organisms compared to that from the use of mupirocin in vitro. Because antibodies may affect enzyme function, we tested antibodies developed after repeated ClyS exposure for their effect on ClyS killing ability. Our results showed no inhibition of ClyS activity at various antibody titers. These data demonstrate the potential of developing ClyS as a novel class of topical antimicrobial agents specific to staphylococcus. PMID:21098252 2. Strengths and weaknesses of Global Positioning System (GPS) data-loggers and semi-structured interviews for capturing fine-scale human mobility: findings from Iquitos, Peru. PubMed Paz-Soldan, Valerie A; Reiner, Robert C; Morrison, Amy C; Stoddard, Steven T; Kitron, Uriel; Scott, Thomas W; Elder, John P; Halsey, Eric S; Kochel, Tadeusz J; Astete, Helvio; Vazquez-Prokopec, Gonzalo M 2014-06-01 Quantifying human mobility has significant consequences for studying physical activity, exposure to pathogens, and generating more realistic infectious disease models. Location-aware technologies such as Global Positioning System (GPS)-enabled devices are used increasingly as a gold standard for mobility research. The main goal of this observational study was to compare and contrast the information obtained through GPS and semi-structured interviews (SSI) to assess issues affecting data quality and, ultimately, our ability to measure fine-scale human mobility. A total of 160 individuals, ages 7 to 74, from Iquitos, Peru, were tracked using GPS data-loggers for 14 days and later interviewed using the SSI about places they visited while tracked. 
A total of 2,047 and 886 places were reported in the SSI and identified by GPS, respectively. Differences in the concordance between methods occurred by location type, distance threshold (within a given radius to be considered a match) selected, GPS data collection frequency (i.e., 30, 90 or 150 seconds) and number of GPS points near the SSI place considered to define a match. Both methods had perfect concordance identifying each participant's house, followed by 80-100% concordance for identifying schools and lodgings, and 50-80% concordance for residences and commercial and religious locations. As the distance threshold selected increased, the concordance between SSI and raw GPS data increased (beyond 20 meters most locations reached their maximum concordance). Processing raw GPS data using a signal-clustering algorithm decreased overall concordance to 14.3%. The most common causes of discordance as described by a sub-sample (n=101) with whom we followed-up were GPS units being accidentally off (30%), forgetting or purposely not taking the units when leaving home (24.8%), possible barriers to the signal (4.7%) and leaving units home to recharge (4.6%). We provide a quantitative assessment of the strengths and weaknesses of 3. Evaluation and intercomparison of downscaled daily precipitation indices over Japan in present-day climate: Strengths and weaknesses of dynamical and bias correction-type statistical downscaling methods NASA Astrophysics Data System (ADS) Iizumi, Toshichika; Nishimori, Motoki; Dairaku, Koji; Adachi, Sachiho A.; Yokozawa, Masayuki 2011-01-01 In this study, we evaluate the accuracy of four regional climate models (NHRCM, NRAMS, TRAMS, and TWRF) and one bias correction-type statistical model (CDFDM) for daily precipitation indices under the present-day climate (1985-2004) over Japan on a 20 km grid interval. 
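The GPS/SSI concordance analysis above hinges on a distance threshold: an interview-reported place counts as matched when GPS fixes fall within a chosen radius of it. A minimal sketch of that matching rule follows; the function names, coordinates, and the one-point match criterion are illustrative assumptions, not the study's actual algorithm.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    R = 6371000.0  # mean Earth radius, metres
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def is_match(place, gps_fixes, radius_m=20, min_points=1):
    """An SSI-reported place 'matches' if at least min_points GPS fixes
    fall within radius_m of it."""
    hits = sum(
        1 for lat, lon in gps_fixes
        if haversine_m(place[0], place[1], lat, lon) <= radius_m
    )
    return hits >= min_points

# Hypothetical example: one reported place and three GPS fixes
place = (-3.7491, -73.2538)  # Iquitos-area coordinates, purely illustrative
fixes = [(-3.74911, -73.25381), (-3.7495, -73.2540), (-3.7600, -73.2600)]
print(is_match(place, fixes, radius_m=20))  # first fix lies within ~2 m
```

Raising `radius_m` monotonically increases the number of matches, which is the behaviour the study reports (concordance rising with the distance threshold up to about 20 metres).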
The evaluated indices are (1) mean precipitation, (2) number of days with precipitation ≥1 mm/d (corresponds to number of wet days), (3) mean amount per wet day, (4) 90th percentile of daily precipitation, and (5) number of days with precipitation ≥90th percentile of daily precipitation. The boundary conditions of the dynamical models and the predictors of the statistical model are given from the single reanalysis data, i.e., JRA25. Both types of models successfully improved the accuracy of the indices relative to the reanalysis data in terms of bias, seasonal cycle, geographical pattern, cumulative distribution function of wet-day amount, and interannual variation pattern. In most aspects, NHRCM is the best model of all indices. Through the intercomparison between the dynamical and statistical models, respective strengths and weaknesses emerged. Briefly, (1) many dynamical models simulate too many wet days with a small amount of precipitation in humid climate zones, such as summer in Japan, relative to the statistical model, unless the cumulus convection scheme improved for such a condition is incorporated; (2) a few dynamical models can derive a better high-order percentile of daily precipitation (e.g., 90th percentile) than the statistical model; (3) both the dynamical and statistical models are still insufficient in the representation of the interannual variation pattern of the number of days with precipitation ≥90th percentile of daily precipitation; (4) the statistical model is comparable to the dynamical models in the long-term mean geographical pattern of the indices even on a 20 km grid interval if a dense observation network is applicable; (5) the statistical model is less accurate 4. 
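The five precipitation indices listed above are simple to compute for a single grid cell's daily series. A sketch follows; whether the 90th percentile is taken over all days or wet days only is not stated in the abstract, so the all-days convention used here is an assumption.

```python
def percentile(values, q):
    """Percentile with linear interpolation between order statistics."""
    s = sorted(values)
    pos = (q / 100) * (len(s) - 1)
    i, frac = int(pos), pos - int(pos)
    return s[i] if frac == 0 else s[i] + frac * (s[i + 1] - s[i])

def precip_indices(daily_mm, wet_threshold=1.0):
    """The five daily-precipitation indices evaluated in the study,
    for one grid cell's series of daily totals (mm/day)."""
    wet = [x for x in daily_mm if x >= wet_threshold]
    p90 = percentile(daily_mm, 90)  # assumed over all days, not wet days only
    return {
        "mean_precip": sum(daily_mm) / len(daily_mm),      # (1)
        "n_wet_days": len(wet),                            # (2) days >= 1 mm/d
        "mean_per_wet_day": sum(wet) / len(wet) if wet else 0.0,  # (3)
        "p90": p90,                                        # (4)
        "n_days_ge_p90": sum(1 for x in daily_mm if x >= p90),    # (5)
    }

# Tiny illustrative series (mm/day)
print(precip_indices([0.0, 0.5, 2.0, 10.0, 30.0]))
```

On this toy series the indices are 8.5 mm/d mean, 3 wet days, 14.0 mm per wet day, a 90th percentile of 22.0 mm, and 1 day at or above it.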
Strengths and Weaknesses of Global Positioning System (GPS) Data-Loggers and Semi-structured Interviews for Capturing Fine-scale Human Mobility: Findings from Iquitos, Peru PubMed Central Paz-Soldan, Valerie A.; Reiner, Robert C.; Morrison, Amy C.; Stoddard, Steven T.; Kitron, Uriel; Scott, Thomas W.; Elder, John P.; Halsey, Eric S.; Kochel, Tadeusz J.; Astete, Helvio; Vazquez-Prokopec, Gonzalo M. 2014-01-01 Quantifying human mobility has significant consequences for studying physical activity, exposure to pathogens, and generating more realistic infectious disease models. Location-aware technologies such as Global Positioning System (GPS)-enabled devices are used increasingly as a gold standard for mobility research. The main goal of this observational study was to compare and contrast the information obtained through GPS and semi-structured interviews (SSI) to assess issues affecting data quality and, ultimately, our ability to measure fine-scale human mobility. A total of 160 individuals, ages 7 to 74, from Iquitos, Peru, were tracked using GPS data-loggers for 14 days and later interviewed using the SSI about places they visited while tracked. A total of 2,047 and 886 places were reported in the SSI and identified by GPS, respectively. Differences in the concordance between methods occurred by location type, distance threshold (within a given radius to be considered a match) selected, GPS data collection frequency (i.e., 30, 90 or 150 seconds) and number of GPS points near the SSI place considered to define a match. Both methods had perfect concordance identifying each participant's house, followed by 80–100% concordance for identifying schools and lodgings, and 50–80% concordance for residences and commercial and religious locations. As the distance threshold selected increased, the concordance between SSI and raw GPS data increased (beyond 20 meters most locations reached their maximum concordance). 
Processing raw GPS data using a signal-clustering algorithm decreased overall concordance to 14.3%. The most common causes of discordance as described by a sub-sample (n = 101) with whom we followed-up were GPS units being accidentally off (30%), forgetting or purposely not taking the units when leaving home (24.8%), possible barriers to the signal (4.7%) and leaving units home to recharge (4.6%). We provide a quantitative assessment of the strengths and 5. Parallel inhibition of active force and relaxed fiber stiffness by caldesmon fragments at physiological ionic strength and temperature conditions: additional evidence that weak cross-bridge binding to actin is an essential intermediate for force generation. PubMed Central Kraft, T; Chalovich, J M; Yu, L C; Brenner, B 1995-01-01 Previously we showed that stiffness of relaxed fibers and active force generated in single skinned fibers of rabbit psoas muscle are inhibited in parallel by actin-binding fragments of caldesmon, an actin-associated protein of smooth muscle, under conditions in which a large fraction of cross-bridges is weakly attached to actin (ionic strength of 50 mM and temperature of 5 degrees C). These results suggested that weak cross-bridge attachment to actin is essential for force generation. The present study provides evidence that this is also true for physiological ionic strength (170 mM) at temperatures up to 30 degrees C, suggesting that weak cross-bridge binding to actin is generally required for force generation. In addition, we show that the inhibition of active force is not a result of changes in cross-bridge cycling kinetics but apparently results from selective inhibition of weak cross-bridge binding to actin. 
Together with our previous biochemical, mechanical, and structural studies, these findings support the proposal that weak cross-bridge attachment to actin is an essential intermediate on the path to force generation and are consistent with the concept that isometric force mainly results from an increase in strain of the attached cross-bridge as a result of a structural change associated with the transition from a weakly bound to a strongly bound actomyosin complex. This mechanism is different from the processes responsible for quick tension recovery that were proposed by Huxley and Simmons (Proposed mechanism of force generation in striated muscle. Nature. 233:533-538.) to represent the elementary mechanism of force generation. PMID:7647245 6. From Weakness to Strength: C-H/π-Interaction-Guided Self-Assembly and Gelation of Poly(benzyl ether) Dendrimers. PubMed Peng, Yi; Feng, Yu; Deng, Guo-Jun; He, Yan-Mei; Fan, Qing-Hua 2016-09-13 The C-H/π interactions as the key driving force for the construction of supramolecular gels remain a great challenge because of their weak nature. We hereby employed for the first time weak C-H/π interactions for the construction of supramolecular dendritic gels based on peripherally methyl-functionalized poly(benzyl ether) dendrimers. Their gelation property is highly dependent on the nature of the peripheral methyl groups. Furthermore, single-crystal X-ray analysis and NMR spectroscopy revealed that multiple C-H/π interactions between the proton of the methyl group and the electron-rich peripheral methyl-substituted aryl ring played significant roles in the formation of supramolecular nanofibers and organogels. This study uncovers the critical role of weak noncovalent interactions and provides new insights into the further design of self-assembled nanomaterials. 7. From Weakness to Strength: C-H/π-Interaction-Guided Self-Assembly and Gelation of Poly(benzyl ether) Dendrimers.
PubMed Peng, Yi; Feng, Yu; Deng, Guo-Jun; He, Yan-Mei; Fan, Qing-Hua 2016-09-13 The C-H/π interactions as the key driving force for the construction of supramolecular gels remain a great challenge because of their weak nature. We hereby employed for the first time weak C-H/π interactions for the construction of supramolecular dendritic gels based on peripherally methyl-functionalized poly(benzyl ether) dendrimers. Their gelation property is highly dependent on the nature of the peripheral methyl groups. Furthermore, single-crystal X-ray analysis and NMR spectroscopy revealed that multiple C-H/π interactions between the proton of the methyl group and the electron-rich peripheral methyl-substituted aryl ring played significant roles in the formation of supramolecular nanofibers and organogels. This study uncovers the critical role of weak noncovalent interactions and provides new insights into the further design of self-assembled nanomaterials. PMID:27538342 8. Randomized, controlled trial of topical exit-site application of honey (Medihoney) versus mupirocin for the prevention of catheter-associated infections in hemodialysis patients. PubMed Johnson, David Wayne; van Eps, Carolyn; Mudge, David William; Wiggins, Kathryn Joan; Armstrong, Kirsty; Hawley, Carmel Mary; Campbell, Scott Bryan; Isbel, Nicole Maree; Nimmo, Graeme Robert; Gibbs, Harry 2005-05-01 The clinical usefulness of hemodialysis catheters is limited by increased infectious morbidity and mortality. Topical antiseptic agents, such as mupirocin, are effective at reducing this risk but have been reported to select for antibiotic-resistant strains. The aim of the present study was to determine the efficacy and the safety of exit-site application of a standardized antibacterial honey versus mupirocin in preventing catheter-associated infections. 
A randomized, controlled trial was performed comparing the effect of thrice-weekly exit-site application of Medihoney versus mupirocin on infection rates in patients who were receiving hemodialysis via tunneled, cuffed central venous catheters. A total of 101 patients were enrolled. The incidences of catheter-associated bacteremias in honey-treated (n = 51) and mupirocin-treated (n = 50) patients were comparable (0.97 versus 0.85 episodes per 1000 catheter-days, respectively; NS). On Cox proportional hazards model analysis, the use of honey was not significantly associated with bacteremia-free survival (unadjusted hazard ratio, 0.94; 95% confidence interval, 0.27 to 3.24; P = 0.92). No exit-site infections occurred. During the study period, 2% of staphylococcal isolates within the hospital were mupirocin resistant. Thrice-weekly application of standardized antibacterial honey to hemodialysis catheter exit sites was safe, cheap, and effective and resulted in a comparable rate of catheter-associated infection to that obtained with mupirocin (although the study was not adequately powered to assess therapeutic equivalence). The effectiveness of honey against antibiotic-resistant microorganisms and its low likelihood of selecting for further resistant strains suggest that this agent may represent a satisfactory alternative means of chemoprophylaxis in patients with central venous catheters. 9. Topical Bactroban (mupirocin): efficacy in treating burn wounds infected with methicillin-resistant staphylococci. PubMed Strock, L L; Lee, M M; Rutan, R L; Desai, M H; Robson, M C; Herndon, D N; Heggers, J P 1990-01-01 Bacterial antimicrobial susceptibility predictors such as the minimal inhibitory concentration (MIC) assay and Nathans Agar Well Diffusion (NAWD) assay provide essential information relevant to the therapeutic approach in burn-wound sepsis. 
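The catheter trial above expresses incidence as episodes per 1,000 catheter-days, a standard exposure-time normalization. The conversion is simple arithmetic, sketched below; the episode and exposure counts are hypothetical, chosen only to land near the honey arm's reported 0.97, since the abstract does not give the raw catheter-day totals.

```python
def rate_per_1000_catheter_days(episodes, catheter_days):
    """Incidence as episodes per 1,000 catheter-days of exposure."""
    return 1000.0 * episodes / catheter_days

# Hypothetical: 3 bacteremia episodes observed over 3,100 catheter-days
print(round(rate_per_1000_catheter_days(3, 3100), 2))  # → 0.97
```

Normalizing by catheter-days rather than by patient count lets arms with different catheter dwell times be compared directly.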
The susceptibilities of 68 gram-positive burn-wound isolates were tested against topical Bactroban (mupirocin) (Beecham Laboratories, Bristol, Tenn.) and compared with other topical antimicrobials such as mafenide acetate, silver sulfadiazine, and bacitracin/neomycin/polymyxin (BNP). Topical susceptibility data were obtained with a modification of NAWD assay. Bactroban's antimicrobial activity was greater than that of mafenide acetate (100% vs 97%), and significantly greater than that of silver sulfadiazine and that of BNP (p less than 0.001). Of the 68 isolates that were susceptible to Bactroban, 51 were predominately methicillin-resistant staphylococci (MRSA). Bactroban showed in vitro activity against 71% of the 85 gram-negative isolates tested. Mafenide acetate showed activity against 89% of these isolates, a significant difference compared with Bactroban (p less than 0.02). In general, no significant difference was found between the activities of Bactroban and silver sulfadiazine against the gram-negative isolates. The activities of mafenide acetate and silver sulfadiazine against isolates of Pseudomonas aeruginosa were significantly greater than that of Bactroban (p less than 0.05). Bactroban may be used in the treatment of documented staphylococcal burn-wound infections. On the basis of the in vitro data, 13 patients with MRSA burn-wound infections susceptible to Bactroban were evaluated. Quantitative wound biopsies were employed to determine the efficacy of this therapeutic approach. The outcome of these infections was correctly predicted by the NAWD assay in 92.3% of the patients treated (p less than 0.0005).(ABSTRACT TRUNCATED AT 250 WORDS) 10. Liposomes-in-hydrogel delivery system with mupirocin: in vitro antibiofilm studies and in vivo evaluation in mice burn model. 
PubMed Hurler, Julia; Sørensen, Karen K; Fallarero, Adyary; Vuorela, Pia; Škalko-Basnet, Nataša 2013-01-01 Previously, we have proposed mupirocin-in-liposomes-in-hydrogel delivery system as advanced delivery system with the potential in treatment of burns. In the current studies, we evaluated the system for its cytotoxicity, ability to prevent biofilm formation, act on the mature biofilms, and finally determined its potential as wound treatment in in vivo mice burn model. The system was found to be nontoxic against HaCaT cells, that is, keratinocytes. It was safe for use and exhibited antibiofilm activity against S. aureus biofilms, although the activity was more significant against planktonic bacteria and prior to biofilm formation than against mature biofilms as shown in the resazurin and the crystal violet assays. An in vivo mice burn model was used to evaluate the biological potential of the system and the healing of burns observed over 28 days. The in vivo data suggest that the delivery system enhances wound healing and is equally potent as the marketed product of mupirocin. Histological examination showed no difference in the quality of the healed scar tissue, whereas the healing time for the new delivery system was shorter as compared to the marketed product. Further animal studies and development of more sophisticated in vivo model are needed for complete evaluation. PMID:24369533 11. Liposomes-in-hydrogel delivery system with mupirocin: in vitro antibiofilm studies and in vivo evaluation in mice burn model. PubMed Hurler, Julia; Sørensen, Karen K; Fallarero, Adyary; Vuorela, Pia; Škalko-Basnet, Nataša 2013-01-01 Previously, we have proposed mupirocin-in-liposomes-in-hydrogel delivery system as advanced delivery system with the potential in treatment of burns. 
In the current studies, we evaluated the system for its cytotoxicity, ability to prevent biofilm formation, act on the mature biofilms, and finally determined its potential as wound treatment in in vivo mice burn model. The system was found to be nontoxic against HaCaT cells, that is, keratinocytes. It was safe for use and exhibited antibiofilm activity against S. aureus biofilms, although the activity was more significant against planktonic bacteria and prior to biofilm formation than against mature biofilms as shown in the resazurin and the crystal violet assays. An in vivo mice burn model was used to evaluate the biological potential of the system and the healing of burns observed over 28 days. The in vivo data suggest that the delivery system enhances wound healing and is equally potent as the marketed product of mupirocin. Histological examination showed no difference in the quality of the healed scar tissue, whereas the healing time for the new delivery system was shorter as compared to the marketed product. Further animal studies and development of more sophisticated in vivo model are needed for complete evaluation. 12. Liposomes-in-Hydrogel Delivery System with Mupirocin: In Vitro Antibiofilm Studies and In Vivo Evaluation in Mice Burn Model PubMed Central Hurler, Julia; Sørensen, Karen K.; Vuorela, Pia; Škalko-Basnet, Nataša 2013-01-01 Previously, we have proposed mupirocin-in-liposomes-in-hydrogel delivery system as advanced delivery system with the potential in treatment of burns. In the current studies, we evaluated the system for its cytotoxicity, ability to prevent biofilm formation, act on the mature biofilms, and finally determined its potential as wound treatment in in vivo mice burn model. The system was found to be nontoxic against HaCaT cells, that is, keratinocytes. It was safe for use and exhibited antibiofilm activity against S. 
aureus biofilms, although the activity was more significant against planktonic bacteria and prior to biofilm formation than against mature biofilms as shown in the resazurin and the crystal violet assays. An in vivo mice burn model was used to evaluate the biological potential of the system and the healing of burns observed over 28 days. The in vivo data suggest that the delivery system enhances wound healing and is equally potent as the marketed product of mupirocin. Histological examination showed no difference in the quality of the healed scar tissue, whereas the healing time for the new delivery system was shorter as compared to the marketed product. Further animal studies and development of more sophisticated in vivo model are needed for complete evaluation. PMID:24369533 13. Effects of sub-lethal concentrations of mupirocin on global transcription in Staphylococcus aureus 8325-4 and a model for the escape from inhibition. PubMed AlHoufie, Sari Talal S; Foster, Howard A 2016-08-01 Staphylococcus aureus is a major pathogen in both hospital and community settings, causing infections ranging from mild skin and wound infections to life-threatening systemic illness. Gene expression changes due to the stringent response have been studied in S. aureus using lethal concentrations of mupirocin, but no studies have investigated the effects of sub-lethal concentrations. S. aureus 8325-4 was exposed to sub-inhibitory concentrations of mupirocin. The production of ppGpp was assessed via HPLC and the effects on global transcription were studied by RNAseq (RNA sequencing) analysis. Growth inhibition had occurred after 1 h of treatment and metabolic analysis revealed that the stringent response alarmone ppGpp was present and GTP concentrations decreased. Transcriptome profiles showed that global transcriptional alterations were similar to those for S. 
aureus after treatment with lethal concentrations of mupirocin, including the repression of genes involved in transcription, translation and replication machineries. Furthermore, up-regulation of genes involved in stress responses, and amino acid biosynthesis and transport, as well as some virulence factor genes, was observed. However, ppGpp was not detectable after 12 or 24 h and cell growth had resumed, although some transcriptional changes remained. Sub-lethal concentrations of mupirocin induce the stringent response, but cells adapt and resume growth once ppGpp levels decrease. 14. Evaluation of hospital palliative care teams: strengths and weaknesses of the before-after study design and strategies to improve it. PubMed Simon, S; Higginson, I J 2009-01-01 Hospital palliative care teams (HPCTs) are well established as multi-professional services to provide palliative care in an acute hospital setting and are increasing in number. However, there is still limited evaluation of them, in terms of efficacy and effectiveness. The gold standard method of evaluation is a randomised control trial, but because of methodological (e.g., randomisation), ethical and practical difficulties such trials are often not possible. HPCT is a complex intervention, and the specific situation in palliative care makes it challenging to evaluate (e.g., distress and cognitive impairment of patients). The quasi-experimental before-after study design has the advantage of enabling an experimental character without randomisation. But this has other weaknesses and is prone to bias, for example, temporal trends and selection bias. As for every study design, avoidance and minimisation of bias is important to improve validity. Therefore, strategies of selecting an appropriate control group or time series and applying valid outcomes and measurement tools help reducing bias and strengthen the methods. Special attention is needed to plan and define the design and applied method. 15. 
Strengths and weaknesses in the supply of school food resulting from the procurement of family farm produce in a municipality in Brazil. PubMed Soares, Panmela; Martinelli, Suellen Secchi; Melgarejo, Leonardo; Davó-Blanes, Mari Carmen; Cavalli, Suzi Barletto 2015-06-01 The objective of this study was to assess compliance with school food programme recommendations for the procurement of family farm produce. This study consists of an exploratory descriptive study utilising a qualitative approach based on semistructured interviews with key informants in a municipality in the State of Santa Catarina in Brazil. Study participants were managers and staff of the school food programme and department of agriculture, and representatives of a farmers' organisation. The produce delivery and demand fulfilment stages of the procurement process were carried out in accordance with the recommendations. However, nonconformities occurred in the elaboration of the public call for proposals, elaboration of the sales proposal, and fulfilment of produce quality standards. It was observed that having a diverse range of suppliers and the exchange of produce by the cooperative with neighbouring municipalities helped to maintain a regular supply of produce. The elaboration of menus contributed to planning agricultural production. However, agricultural production was not mapped before elaborating the menus in this case study and an agricultural reform settlement was left out of the programme. A number of weaknesses in the programme were identified which need to be overcome in order to promote local family farming and improve the quality of school food in the municipality. 16. Strength from weakness: conformational divergence between solid and solution states of substituted cyclitols facilitated by CH···O hydrogen bonding. 
PubMed Vibhute, Amol M; Sureshan, Kana M 2014-06-01 We have investigated the conformational preferences of a series of cyclitol derivatives, namely mono- and diesters of 1,2:5,6-di-O-isopropylidene-myo-inositol and 1,2:5,6-di-O-cyclohexylidene-myo-inositol, in both solid and solution states. The solid-state conformations were determined by single-crystal X-ray analysis. The solution-state conformations were determined by using NMR. The experimental (3)J(HH) values were applied in the Haasnoot-Altona equation to calculate the dihedral angle (ϕ) between the respective vicinal protons. By fixing the dihedral angle between different sets of vicinal protons, the molecules were energy-minimized by MM2 method to visualize their conformation in solution. As the solvent polarities can influence the conformational preference, we have determined the conformations of these molecules in various solvents of different polarities such as benzene-d6, chloroform-d, acetonitrile-d3, acetone-d6, methanol-d4, and DMSO-d6. All of the compounds adopted boat conformations in solution irrespective of the solvents, acyl groups, or alkylidene protecting groups. This conformation places H6 and O3 of the cyclitol ring in proximity, such that an intramolecular CH···O hydrogen bond between them stabilizes this otherwise unstable conformation. However, in the solid state, several intermolecular CH···O hydrogen bonds force these molecules to adopt the chair conformation. This study uncovers the role of weak noncovalent interactions in influencing the molecular conformations differentially in different states. 17. Characterizing the Long-Term PM2.5 Concentration-Response Function: Comparing the Strengths and Weaknesses of Research Synthesis Approaches. 
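The cyclitol study above derives dihedral angles by feeding experimental (3)J(HH) couplings into the Haasnoot-Altona equation, a generalized Karplus relation with substituent-electronegativity corrections. The sketch below illustrates only the parent Karplus form and a brute-force inversion; the constants are commonly quoted textbook values rather than the paper's parameterization, and the electronegativity terms are omitted entirely.

```python
from math import cos, radians

# Parent Karplus relation: 3J(HH) = A*cos^2(phi) + B*cos(phi) + C.
# Constants are illustrative textbook values, not the Haasnoot-Altona fit.
A, B, C = 7.76, -1.10, 1.40

def karplus_j(phi_deg):
    """Predicted vicinal coupling (Hz) for an H-C-C-H dihedral of phi_deg."""
    c = cos(radians(phi_deg))
    return A * c * c + B * c + C

def dihedrals_for_j(j_obs, tol=0.1):
    """Scan 0-180 degrees and return dihedrals whose predicted 3J is
    within tol Hz of the observed coupling (the inverse problem is
    multivalued, so several angles can fit one coupling)."""
    return [phi for phi in range(181) if abs(karplus_j(phi) - j_obs) <= tol]

print(round(karplus_j(180), 2))   # anti-periplanar coupling → 10.26 Hz
print(dihedrals_for_j(8.0))       # small-angle and ~149 degree solutions
```

The multivalued inversion is why the study cross-checks the NMR-derived angles against MM2-minimized conformations rather than reading geometry off a single coupling.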
PubMed Fann, Neal; Gilmore, Elisabeth A; Walker, Katherine 2016-09-01 The magnitude, shape, and degree of certainty in the association between long-term population exposure to ambient fine particulate matter (PM2.5 ) and the risk of premature death is one of the most intensely studied issues in environmental health. For regulatory risk analysis, this relationship is described quantitatively by a concentration-response (C-R) function that relates exposure to ambient concentrations with the risk of premature mortality. Four data synthesis techniques develop the basis for, and derive, this function: systematic review, expert judgment elicitation, quantitative meta-analysis, and integrated exposure-response (IER) assessment. As part of an academic workshop aiming to guide the use of research synthesis approaches, we developed criteria with which to evaluate and select among the approaches for their ability to inform policy choices. These criteria include the quality and extent of scientific support for the method, its transparency and verifiability, its suitability to the policy problem, and the time and resources required for its application. We find that these research methods are both complementary and interdependent. A systematic review of the multidisciplinary evidence is a starting point for all methods, providing the broad conceptual basis for the nature, plausibility, and strength of the associations between PM exposure and adverse health effects. Further, for a data-rich application like PM2.5 and premature mortality, all three quantitative approaches can produce estimates that are suitable for regulatory and benefit analysis. However, when fewer data are available, more resource-intensive approaches such as expert elicitation may be more important for understanding what scientists know, where they agree or disagree, and what they believe to be the most important areas of uncertainty. 
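Concentration-response functions of the kind discussed above are, in regulatory benefit analysis, typically log-linear in concentration: relative risk scales as exp(beta * deltaC), and attributable deaths follow from the baseline mortality rate and exposed population. A minimal sketch follows; the relative-risk value, baseline rate, and population are illustrative assumptions, not figures from the paper.

```python
from math import exp, log

def attributable_deaths(baseline_rate, population, delta_c, rr_per_10):
    """Log-linear C-R health impact function (sketch).

    baseline_rate: annual deaths per person
    delta_c:       PM2.5 reduction in ug/m3
    rr_per_10:     relative risk per 10 ug/m3 (illustrative assumption)
    """
    beta = log(rr_per_10) / 10.0
    return baseline_rate * population * (1.0 - exp(-beta * delta_c))

# Hypothetical city: 1,000,000 people, baseline mortality 0.008/yr,
# a 5 ug/m3 PM2.5 reduction, and an assumed RR of 1.06 per 10 ug/m3
print(round(attributable_deaths(0.008, 1_000_000, 5.0, 1.06)))  # → 230
```

Note how the synthesis method chosen (meta-analysis, expert elicitation, or IER) ultimately enters this calculation only through the single parameter `rr_per_10`, which is why the review above focuses so heavily on how that value is derived.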
Whether implicitly or explicitly, all require considerable judgment by scientists. Finding ways for all these methods to acknowledge 18. Effect of the pH and the ionic strength on overloaded band profiles of weak bases onto neutral and charged surface hybrid stationary phases in reversed-phase liquid chromatography. PubMed Gritti, Fabrice; Guiochon, Georges 2013-03-22 This work reports on the effects of the solution pH and its ionic strength on the overloaded band profiles and the parameters of the adsorption isotherms of nortriptylinium hydrochloride on the bridge ethylene hybrid (BEH) and the charged surface hybrid (CSH) C18-bonded columns. The mobile phases used were mixtures of acetonitrile and water buffered with hydrogenophosphate, formate, acetate, and dihydrogenophosphate buffers. The results show that the adsorption behavior of this protonated base onto the BEH-C18 column depends barely on the mobile phase pH and is slightly affected by its ionic strength. From both physical and statistical viewpoints, the Linear-Langmuir model is the most relevant adsorption isotherm. According to the inverse method of chromatography, this model is consistent with weak dispersive interactions taking place onto the C18-bonded chains and some strong ion-dipole interactions with residual silanols. In contrast, adsorption on the CSH-C18 column depends on the applied W(S)pH, e.g., on the degree of ionization of the amine groups tethered to the CSH surface. For W(S)pH<3, the electrostatically modified Langmuir model (EML) is acceptable because analyte molecules cannot access and interact with any active sites due to the electrostatic repulsion by the positively charged adsorbent surface. At W(S)pH>7.0, the Linear-bi-Langmuir model describes best the weak adsorption of the protonated base molecules onto the C18 chains and their strong adsorption onto the residual silanols and neutral amine groups. PMID:23415137 19. 
Weak bond screening system NASA Astrophysics Data System (ADS) Chuang, S. Y.; Chang, F. H.; Bell, J. R. Consideration is given to the development of a weak bond screening system which is based on the utilization of a high power ultrasonic (HPU) technique. The instrumentation of the prototype bond strength screening system is described, and the adhesively bonded specimens used in the system developmental effort are detailed. Test results obtained from these specimens are presented in terms of bond strength and level of high power ultrasound irradiation. The following observations were made: (1) for Al/Al specimens, 2.6 sec of HPU irradiation will screen weak bond conditions due to improper preparation of bonding surfaces; (2) for composite/composite specimens, 2.0 sec of HPU irradiation will disrupt weak bonds due to under-cured conditions; (3) for Al honeycomb core with composite skin structure, 3.5 sec of HPU irradiation will disrupt weak bonds due to bad adhesive or oils contamination of bonding surfaces; and (4) for Nomex honeycomb with Al skin structure, 1.3 sec of HPU irradiation will disrupt weak bonds due to bad adhesive. 20. The interRAI Acute Care instrument incorporated in an eHealth system for standardized and web-based geriatric assessment: strengths, weaknesses, opportunities and threats in the acute hospital setting PubMed Central 2013-01-01 Background The interRAI Acute Care instrument is a multidimensional geriatric assessment system intended to determine a hospitalized older persons’ medical, psychosocial and functional capacity and needs. Its objective is to develop an overall plan for treatment and long-term follow-up based on a common set of standardized items that can be used in various care settings. A Belgian web-based software system (BelRAI-software) was developed to enable clinicians to interpret the output and to communicate the patients’ data across wards and care organizations. 
The purpose of the study is to evaluate the (dis)advantages of the implementation of the interRAI Acute Care instrument as a comprehensive geriatric assessment instrument in an acute hospital context. Methods In a cross-sectional multicenter study on four geriatric wards in three acute hospitals, trained clinical staff (nurses, occupational therapists, social workers, and geriatricians) assessed 410 inpatients in routine clinical practice. The BelRAI-system was evaluated by focus groups, observations, and questionnaires. The Strengths, Weaknesses, Opportunities and Threats were mapped (SWOT-analysis) and validated by the participants. Results The primary strengths of the BelRAI-system were a structured overview of the patients’ condition early after admission and the promotion of multidisciplinary assessment. Our study was a first attempt to transfer standardized data between home care organizations, nursing homes and hospitals and a way to centralize medical, allied health professionals and nursing data. With the BelRAI-software, privacy of data is guaranteed. Weaknesses are the time-consuming character of the process and the overlap with other assessment instruments or (electronic) registration forms. There is room for improving the user-friendliness and the efficiency of the software, which needs hospital-specific adaptations. Opportunities are a timely and systematic problem detection and continuity of

1. Mutational analysis reveals that all tailoring region genes are required for production of polyketide antibiotic mupirocin by Pseudomonas fluorescens: pseudomonic acid B biosynthesis precedes pseudomonic acid A.
PubMed Hothersall, Joanne; Wu, Ji'en; Rahman, Ayesha S; Shields, Jennifer A; Haddock, James; Johnson, Nicola; Cooper, Sian M; Stephens, Elton R; Cox, Russell J; Crosby, John; Willis, Christine L; Simpson, Thomas J; Thomas, Christopher M 2007-05-25 The Pseudomonas fluorescens mupirocin biosynthetic cluster encodes six proteins involved in polyketide biosynthesis and 26 single polypeptides proposed to perform largely tailoring functions. In-frame deletions in the tailoring open reading frames demonstrated that all are required for mupirocin production. A bidirectional promoter region was identified between mupF, which runs counter to other open reading frames and its immediate neighbor macpC, implying the 74-kb cluster consists of two transcriptional units. mupD/E and mupJ/K must be cotranscribed as pairs for normal function implying co-assembly during translation. MupJ and K belong to a widely distributed enzyme pair implicated, with MupH, in methyl addition. Deletion of mupF, a putative ketoreductase, produced a mupirocin analogue with a C-7 ketone. Deletion of mupC, a putative dienoyl CoA reductase, generated an analogue whose structure indicated that MupC is also implicated in control of the oxidation state around the tetrahydropyran ring of monic acid. Double mutants with DeltamupC and DeltamupO, DeltamupU, DeltamupV, or DeltamacpE produced pseudomonic acid B but not pseudomonic acid A, as do the mupO, U, V, and macpE mutants, indicating that MupC must work after MupO, U, and V. 2. The Zirconia Ceramic: Strengths and Weaknesses PubMed Central Daou, Elie E. 2014-01-01 Metal ceramic restorations were considered the gold standard as reliable materials. Increasing demand for esthetics supported the commercialization of new metal free restorations. A growing demand is rising for zirconia prostheses. Peer-reviewed articles published till July 2013 were identified through a Medline (Pubmed and Elsevier). Emphasizing was made on zirconia properties and applications. 
Zirconia materials are able to withstand posterior physiologic loads. Although zirconia cores are considered reliable materials, these restorations are not problem free. PMID:24851138

3. Weak Interactions

DOE R&D Accomplishments Database Lee, T. D. 1957-06-01 Experimental results on the non-conservation of parity and charge conservation in weak interactions are reviewed. The two-component theory of the neutrino is discussed. Lepton reactions are examined under the assumption of the law of conservation of leptons and that the neutrino is described by a two-component theory. From the results of this examination, the universal Fermi interactions are analyzed. Although reactions involving the neutrino can be described, the same is not true of reactions which do not involve the lepton, as the discussion of the decay of K mesons and hyperons shows. The question of the invariance of time reversal is next examined. (J.S.R.)

4. Quantum discord with weak measurements

SciTech Connect Singh, Uttam; Pati, Arun Kumar 2014-04-15 Weak measurements cause small changes to quantum states, thereby opening up the possibility of new ways of manipulating and controlling quantum systems. We ask, can weak measurements reveal more quantum correlation in a composite quantum state? We prove that the weak measurement induced quantum discord, called the “super quantum discord”, is always larger than the quantum discord captured by the strong measurement. Moreover, we prove the monotonicity of the super quantum discord as a function of the measurement strength, and in the limit of strong projective measurement the super quantum discord becomes the normal quantum discord. We find that unlike the normal discord, for pure entangled states, the super quantum discord can exceed the quantum entanglement.
Our results provide new insights on the nature of quantum correlation and suggest that the notion of quantum correlation is not only observer dependent but also depends on how weakly one perturbs the composite system. We illustrate the key results for pure as well as mixed entangled states. Highlights:
• Introduced the role of weak measurements in quantifying quantum correlation.
• We have introduced the notion of the super quantum discord (SQD).
• For pure entangled states, we show that the SQD exceeds the entanglement entropy.
• This shows that quantum correlation depends not only on the observer but also on the measurement strength.

5. Spin effects in the weak interaction

SciTech Connect Freedman, S.J. (Chicago Univ., IL. Dept. of Physics; Chicago Univ., IL. Enrico Fermi Inst.) 1990-01-01 Modern experiments investigating the beta decay of the neutron and light nuclei are still providing important constraints on the theory of the weak interaction. Beta decay experiments are yielding more precise values for allowed and induced weak coupling constants and putting constraints on possible extensions to the standard electroweak model. Here we emphasize the implications of recent experiments to pin down the strengths of the weak vector and axial vector couplings of the nucleon.

6. Weak scale supersymmetry

SciTech Connect Hall, L.J. (California Univ., Berkeley, CA. Dept. of Physics) 1990-11-12 An introduction to the ideas and current state of weak scale supersymmetry is given. It is shown that LEP data on Z decays has already excluded two of the most elegant models of weak scale supersymmetry. 14 refs.

7. Frequency of biocide-resistant genes and susceptibility to chlorhexidine in high-level mupirocin-resistant, methicillin-resistant Staphylococcus aureus (MuH MRSA).
PubMed Liu, Qingzhong; Zhao, Huanqiang; Han, Lizhong; Shu, Wen; Wu, Qiong; Ni, Yuxing 2015-08-01 The aim of this study was to determine the prevalence of biocide-resistant determinants and the susceptibility to chlorhexidine in high-level mupirocin-resistant, methicillin-resistant Staphylococcus aureus (MuH MRSA). Fifty-three MuH MRSA isolates were analyzed for plasmid-borne genes (qacA/B, smr, qacG, qacH, and qacJ) by polymerase chain reaction (PCR); for chromosome-mediated genes (norA, norB, norC, mepA, mdeA, sepA, and sdrM) by PCR and quantitative reverse transcription-PCR (qRT-PCR); and for susceptibility to chlorhexidine by MIC and minimum bactericidal concentration (MBC). Furthermore, disinfectant efficacy was tested in the presence of 3.0% bovine serum albumin (BSA) in MBC detection. The plasmid-borne genes qacA/B (83.0%) and smr (77.4%) and overexpressions of chromosome-mediated genes norA (49.0%) and norB (28.8%) were predominantly found in isolates studied, and 90.6% of the isolates revealed tolerance to chlorhexidine. In the presence of BSA, the average MBC of chlorhexidine for these isolates rose to 256 μg/mL. Altogether, our results suggest that surveillance of sensitivity to biocides among MuH MRSA isolates is essential for hospital infection control. 8. Development of a selective culture medium for bifidobacteria, Raffinose-Propionate Lithium Mupirocin (RP-MUP) and assessment of its usage with Petrifilm™ Aerobic Count plates. PubMed Miranda, Rodrigo Otávio; de Carvalho, Antonio Fernandes; Nero, Luís Augusto 2014-05-01 This study aimed to develop a selective culture media to enumerate bifidobacteria in fermented milk and to assess this medium when used with Petrifilm™ AC plates. For this purpose, Bifidobacterium spp., Lactobacillus spp. and Streptococcus thermophilus strains were tested to verify their fermentation patterns for different carbohydrates. All bifidobacteria strains were able to use raffinose. 
Based on these characteristic, a selective culture medium was proposed (Raffinose-Propionate Lithium Mupirocin, RP-MUP), used with Petrifilm™ AC plates, and was used to enumerate bifidobacteria in fermented milk. RP-MUP performance was assessed by comparing the results with this medium to reference protocols and culture media for bifidobacteria enumeration. RP-MUP, whether used or not with Petrifilm™ AC, presented similar performance to TOS-MUP (ISO 29981), with no significant differences between the mean bifidobacteria counts (p < 0.05) and with high correlation indices (r = 0.99, p < 0.05). As an advantage, reliable results were obtained after just 48 h of incubation when RP-MUP was used with Petrifilm™ AC, instead of the 72 h described in the ISO 29981 protocol. 9. Shoulder weakness in professional baseball pitchers. PubMed Magnusson, S P; Gleim, G W; Nicholas, J A 1994-01-01 The purposes of this study were to: 1) compare shoulder range of motion and strength in professional baseball pitchers (N = 47) compared with age-matched controls (N = 16), and 2) examine the relationship of injury history to strength and range of motion. Based on injury history pitchers were categorized as: 1) none (N = 26), 2) injury requiring conservative intervention (N = 9), or 3) injury requiring surgical intervention (N = 12). Range of motion was measured for internal rotation (IROM) and external rotation (EROM). Eccentric strength was measured by hand-held dynamometer for internal rotation (IR), external rotation (ER), abduction (ABD), and supraspinatus muscle (SUP) strength. Injury history had no effect on strength and range of motion. Dominant EROM was greater in pitchers, P < 0.0001, and controls, P < 0.05, with pitchers having greater EROM motion bilaterally, P < 0.0001. Pitchers were weaker in SUP on the dominant vs nondominant side, P < 0.0001, and on the dominant side for weight adjusted ER, ABD, P < 0.01, and SUP, P < 0.0001, compared with controls. 
In conclusion, dominance and pitching resulted in soft tissue adaptation. Pitchers displayed weakness in three of four tests by comparison with controls, suggesting that the demands of pitching are insufficient to produce eccentric strength gains and may in fact lead to weakness. Dominant-side SUP weakness in pitchers may reflect subclinical pathology or chronic fatigue. 10. Postselected weak measurement beyond the weak value SciTech Connect Geszti, Tamas 2010-04-15 Closed expressions are derived for the quantum measurement statistics of pre- and postselected Gaussian particle beams. The weakness of the preselection step is shown to compete with the nonorthogonality of postselection in a transparent way. The approach is shown to be useful in analyzing postselection-based signal amplification, allowing measurements to be extended far beyond the range of validity of the well-known Aharonov-Albert-Vaidman limit. Additionally, the present treatment connects postselected weak measurement to the topic of phase-contrast microscopy. 11. Distribution of muscle weakness of central and peripheral origin. PubMed Thijs, R D; Notermans, N C; Wokke, J H; van der Graaf, Y; van Gijn, J 1998-11-01 According to the established clinical tradition about the distribution of weakness, the ratios of flexor/extensor strength of patients with upper motor neuron lesions are expected to be relatively high for the elbow and wrist and low for the knee. To assess the diagnostic value of these patterns of weakness, muscle strength of 70 patients with limb weakness of central or peripheral origin was measured with a hand held dynamometer. The ratios of flexor/extensor strength at the knee, elbow, and wrist did not differ significantly between patients with central or peripheral origin of muscle weakness. The examination of tendon jerks proved to be of more value as a localising feature. 
The traditional notion about the distribution of weakness in upper motor neuron lesions may be explained by an intrinsically greater strength in antigravity muscles, together with the effects of hypertonia. PMID:9810962 12. Aperiodic Weak Topological Superconductors NASA Astrophysics Data System (ADS) Fulga, I. C.; Pikulin, D. I.; Loring, T. A. 2016-06-01 Weak topological phases are usually described in terms of protection by the lattice translation symmetry. Their characterization explicitly relies on periodicity since weak invariants are expressed in terms of the momentum-space torus. We prove the compatibility of weak topological superconductors with aperiodic systems, such as quasicrystals. We go beyond usual descriptions of weak topological phases and introduce a novel, real-space formulation of the weak invariant, based on the Clifford pseudospectrum. A nontrivial value of this index implies a nontrivial bulk phase, which is robust against disorder and hosts localized zero-energy modes at the edge. Our recipe for determining the weak invariant is directly applicable to any finite-sized system, including disordered lattice models. This direct method enables a quantitative analysis of the level of disorder the topological protection can withstand. 13. Aperiodic Weak Topological Superconductors. PubMed Fulga, I C; Pikulin, D I; Loring, T A 2016-06-24 Weak topological phases are usually described in terms of protection by the lattice translation symmetry. Their characterization explicitly relies on periodicity since weak invariants are expressed in terms of the momentum-space torus. We prove the compatibility of weak topological superconductors with aperiodic systems, such as quasicrystals. We go beyond usual descriptions of weak topological phases and introduce a novel, real-space formulation of the weak invariant, based on the Clifford pseudospectrum. 
A nontrivial value of this index implies a nontrivial bulk phase, which is robust against disorder and hosts localized zero-energy modes at the edge. Our recipe for determining the weak invariant is directly applicable to any finite-sized system, including disordered lattice models. This direct method enables a quantitative analysis of the level of disorder the topological protection can withstand. PMID:27391744

14. Apple Strength Issues

SciTech Connect Syn, C 2009-12-22 Strength of the apple parts has been noticed to decrease, especially in those installed by the new induction heating system since the LEP campaign started. Fig. 1 shows the ultimate tensile strength (UTS), yield strength (YS), and elongation of the installed or installation-simulated apples on various systems. One can clearly see that the mean values of UTS and YS of the post-LEP parts decreased by about 8 ksi and 6 ksi respectively from those of the pre-LEP parts. The slight increase in elongation seen in Fig. 1 can be understood from the weak inverse relationship between strength and elongation in metals. Fig. 2 shows the weak correlation between the YS and elongation of the parts listed in Fig. 1. Strength data listed in Fig. 1 were re-plotted as histograms in Figs. 3 and 4. Figs. 3a and 4a show histograms of all UTS and YS data. Figs. 3b and 4b show histograms of pre-LEP data and Figs. 3c and 4c of post-LEP data. Data on statistical scatter of tensile strengths have rarely been published by material suppliers. Instead, only the minimum 'guaranteed' strength data are typically presented. An example of the strength distribution of aluminum 7075-T6 sheet material, listed in Fig. 5, shows that its scatter width of both UTS and YS for a single sheet can be about 6 ksi, and the multi-lot scatter can be as large as 11 ksi even though the sheets have been produced through a well-controlled manufacturing process. By approximating the histograms shown in Figs.
3 and 4 by a Gaussian or similar type of distribution curve, one can plausibly see the strength reductions in the later or more recent apples. The pre-LEP data in Figs. 3b and 4b show wider scatter than the post-LEP data in Figs. 3c and 4c and seem to follow a bimodal distribution of strength, indicating that the apples might have been made from two different lots of material, either from two different vendors or from two different melts of perhaps slightly different chemical composition by a single vendor. The post

15. Many-body chaos at weak coupling

NASA Astrophysics Data System (ADS) Stanford, Douglas 2016-10-01 The strength of chaos in large N quantum systems can be quantified using λ_L, the rate of growth of certain out-of-time-order four point functions. We calculate λ_L to leading order in a weakly coupled matrix Φ^4 theory by numerically diagonalizing a ladder kernel. The computation reduces to an essentially classical problem.

16. Experimental noiseless linear amplification using weak measurements

NASA Astrophysics Data System (ADS) Ho, Joseph; Boston, Allen; Palsson, Matthew; Pryde, Geoff 2016-09-01 The viability of quantum communication schemes relies on sending quantum states of light over long distances. However, transmission loss can degrade the signal strength, adding noise. Heralded noiseless amplification of a quantum signal can provide a solution by enabling longer direct transmission distances and by enabling entanglement distillation. The central idea of heralded noiseless amplification—a conditional modification of the probability distribution over photon number of an optical quantum state—is suggestive of a parallel with weak measurement: in a weak measurement, learning partial information about an observable leads to a conditional back-action of a commensurate size.
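The Apple Strength Issues entry above compares pre- and post-LEP lots by approximating each strength histogram with a Gaussian, i.e., by its mean and standard deviation. A minimal sketch of that comparison, using hypothetical UTS samples (illustrative values only, not the report's actual measurements) and Python's standard statistics module:

```python
import statistics

# Hypothetical ultimate tensile strength samples in ksi; these values are
# made up for illustration and are NOT the report's measured data.
pre_lep_uts = [78.0, 80.5, 82.1, 79.4, 81.0, 83.2, 77.8, 80.9]
post_lep_uts = [70.2, 72.5, 71.8, 73.0, 72.1, 71.4, 73.6, 70.9]

def gaussian_summary(samples):
    """Mean and sample standard deviation: the two parameters of the
    Gaussian curve approximating a strength histogram."""
    return statistics.mean(samples), statistics.stdev(samples)

pre_mean, pre_sd = gaussian_summary(pre_lep_uts)
post_mean, post_sd = gaussian_summary(post_lep_uts)

# The report describes a mean UTS drop of roughly 8 ksi between lots;
# the summary statistics make such a shift explicit.
print(f"pre-LEP : mean={pre_mean:.1f} ksi, sd={pre_sd:.1f} ksi")
print(f"post-LEP: mean={post_mean:.1f} ksi, sd={post_sd:.1f} ksi")
print(f"mean shift: {pre_mean - post_mean:.1f} ksi")
```

A wider standard deviation in one lot (as the report notes for the pre-LEP data) or a bimodal histogram would then point to mixed material lots rather than a single drifting process.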
Here we experimentally investigate the application of weak, or variable-strength, measurements to the task of heralded amplification, by using a quantum logic gate to weakly couple a small single-optical-mode quantum state (the signal) to an ancilla photon (the meter). The weak measurement is carried out by choosing the measurement basis of the meter photon and, by conditioning on the meter outcomes, the signal is amplified. We characterise the gain of the amplifier as a function of the measurement strength, and use interferometric methods to show that the operation preserves the coherence of the signal. 17. Strength Testing. ERIC Educational Resources Information Center Londeree, Ben R. 1981-01-01 Postural deviations resulting from strength and flexibility imbalances include swayback, scoliosis, and rounded shoulders. Screening tests are one method for identifying strength problems. Tests for the evaluation of postural problems are described, and exercises are presented for the strengthening of muscles. (JN) 18. Multidrug-resistant clones of community-associated meticillin-resistant Staphylococcus aureus isolated from Chinese children and the resistance genes to clindamycin and mupirocin. PubMed Wang, Lijuan; Liu, Yingchao; Yang, Yonghong; Huang, Guoying; Wang, Chuanqing; Deng, Li; Zheng, Yuejie; Fu, Zhou; Li, Changcong; Shang, Yunxiao; Zhao, Changan; Sun, Mingjiao; Li, Xiangmei; Yu, Sangjie; Yao, Kaihu; Shen, Xuzhuang 2012-09-01 This study aimed to correlate the multidrug resistance (MDR) and sequence type (ST) clones of community-associated (CA) meticillin-resistant Staphylococcus aureus (MRSA) to identify the genes responsible for clindamycin and mupirocin resistance in S. aureus isolates from paediatric hospitals in mainland China. A total of 435 S. aureus isolates were collected. Compared with CA meticillin-susceptible S. 
aureus (MSSA), the resistance rates of CA-MRSA to ciprofloxacin, chloramphenicol, gentamicin and tetracycline were higher (19.0 vs 2.6 %, P<0.001; 14.7 vs 3.1 %, P<0.001; 14.7 vs 3.1 %, P<0.01; and 46.0 vs 13.3 %, P<0.001, respectively). Compared with hospital-associated (HA)-MRSA, the resistance rates of CA-MRSA to ciprofloxacin, gentamicin, rifampicin, tetracycline and trimethoprim-sulfamethoxazole were lower (19 vs 94.8 %, P<0.001; 14.7 vs 84.4 %, P<0.001; 5.5 vs 88.3 %, P<0.001; 46 vs 94.8 %, P<0.001; and 1.8 vs 9.1 %, P<0.01, respectively). The resistance rates of CA-MRSA, HA-MRSA and CA-MSSA to clindamycin (92.0, 77.9 and 64.1 %, respectively) and erythromycin (85.9, 77.9 and 63.1 %, respectively) were high. The MDR rates (resistance to three or more non-β-lactams) were 49.6, 100 and 14 % in the CA-MRSA, HA-MRSA and CA-MSSA isolates, respectively. Five of seven ST clones in the CA-MRSA isolates, namely ST59, ST338, ST45, ST910 and ST965, had MDR rates of >50 % (67.9, 87.5, 100, 50 and 83.3 %, respectively). The constitutive phenotype of macrolide-lincosamide-streptogramin B (MLS(B)) resistance (69 %) and the ermB gene (38.1 %) predominated among the MLS(B)-resistant CA S. aureus strains. The resistance rate to mupirocin was 2.3 % and plasmids carrying the mupA gene varied in size between 23 and 54.2 kb in six strains with high-level resistance as determined by Southern blot analysis. The present study showed that resistance to non-β-lactams, especially to clindamycin, is high in CA-MRSA isolates from Chinese children and that 19. Weak measure expansive flows NASA Astrophysics Data System (ADS) Lee, Keonhee; Oh, Jumi 2016-01-01 A notion of measure expansivity for flows was introduced by Carrasco-Olivera and Morales in [3] as a generalization of expansivity, and they proved that there were no measure expansive flows on closed surfaces. 
In this paper we introduce a concept of weak measure expansivity for flows which is really weaker than that of measure expansivity, and show that there is a weak measure expansive flow on a closed surface. Moreover we show that any C1 stably weak measure expansive flow on a C∞ closed manifold M is Ω-stable, and any C1 stably measure expansive flow on M satisfies both Axiom A and the quasi-transversality condition. 20. Coupling-deformed pointer observables and weak values NASA Astrophysics Data System (ADS) Zhang, Yu-Xiang; Wu, Shengjun; Chen, Zeng-Bing 2016-03-01 While the novel applications of weak values have recently attracted wide attention, weak measurement, the usual way to extract weak values, suffers from risky approximations and severe quantum noises. In this paper, we show that the weak-value information can be obtained exactly in strong measurement with postselections, via measuring the coupling-deformed pointer observables, i.e., the observables selected according to the coupling strength. With this approach, we keep all the advantages claimed by weak-measurement schemes and at the same time solve some widely criticized problems thereof, such as the questionable universality, systematical bias, and drastic inefficiency. 1. History of Weak Interactions DOE R&D Accomplishments Database Lee, T. D. 1970-07-01 While the phenomenon of beta-decay was discovered near the end of the last century, the notion that the weak interaction forms a separate field of physical forces evolved rather gradually. This became clear only after the experimental discoveries of other weak reactions such as muon-decay, muon-capture, etc., and the theoretical observation that all these reactions can be described by approximately the same coupling constant, thus giving rise to the notion of a universal weak interaction. 
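Several of the abstracts above (super quantum discord, postselected weak measurement, coupling-deformed pointer observables) build on the standard weak-value expression A_w = ⟨f|A|i⟩/⟨f|i⟩, which can lie far outside the eigenvalue range of A when the pre- and postselected states are nearly orthogonal. A minimal sketch for σ_z with real qubit states (illustrative only; the function name and state parametrization are my own):

```python
import math

def weak_value_sigma_z(theta_i, theta_f):
    """Weak value of sigma_z for real qubit states |psi> = (cos t, sin t):
    A_w = <f|Z|i> / <f|i>.  For nearly orthogonal pre/postselection the
    result can lie far outside sigma_z's eigenvalue range [-1, +1]."""
    i0, i1 = math.cos(theta_i), math.sin(theta_i)
    f0, f1 = math.cos(theta_f), math.sin(theta_f)
    overlap = f0 * i0 + f1 * i1      # <f|i>
    numerator = f0 * i0 - f1 * i1    # <f|Z|i>, since Z = diag(1, -1)
    return numerator / overlap

# Identical pre/postselection reduces to the ordinary expectation value,
# while nearly orthogonal states give an anomalously large weak value.
print(weak_value_sigma_z(0.3, 0.3))                       # ordinary <Z>
print(weak_value_sigma_z(math.pi / 4, -math.pi / 4 + 0.01))  # amplified
```

This amplification at small overlap is exactly the effect the postselected-weak-measurement and noiseless-amplification entries exploit, at the cost of a low postselection success probability.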
Only then did one slowly recognize that the weak interaction force forms an independent field, perhaps on the same footing as the gravitational force, the electromagnetic force, and the strong nuclear and sub-nuclear forces. 2. Overview of inhalation exposure techniques: strengths and weaknesses. PubMed Pauluhn, Jürgen 2005-07-01 The vast majority of toxicity studies and risk evaluations deal with single chemicals. Due to the growing interest in potential human health risks originating from exposure to environmental pollutants or lifestyle-related complex chemical mixtures, well thought-out tailor-made mechanistic inhalation toxicity studies have been performed. In contrast to the complex mixtures potentially encountered from hazardous waste sites, drinking water disinfection by-products, natural flavoring complexes or the cumulative intake of food additives and pesticide residues, the scientific evaluation of complex airborne mixtures, such as acid aerosols, atmospheres produced by combustion or thermolysis, e.g. residual oil fly ash (ROFA), diesel and gasoline exhaust, and tobacco smoke, or volatile organic chemicals (VOCs) in residential areas, to mention but a few, is a daunting challenge for experimental toxicologists. These challenges include the controlled in situ generation of exposure atmospheres, the compositions of which are often process-determined and metastable. This means that volatile agents may partition with liquid aerosols or be adsorbed onto surfaces of solid aerosols. Similarly, the nature and composition of test atmospheres might change continuously through oxidation and aging of constituents or coagulation of particles. This, in turn, poses additional challenges to the analytical characterization of such complex test atmospheres, including the identification of potential experimental artifacts. 
Accordingly, highly standardized and controlled inhalation studies are required for hazard identification of complex mixtures and the results of inhalation studies have to be analyzed judiciously due to the great number of experimental variables. These variables may be related to technical issues or to the specific features of the animal model. Although inhalation exposure of animals mimics human exposure best, not all results obtained under such rigorous test conditions might necessarily also occur under real-life exposure conditions. In addition, to simulate experimentally specific use or exposure patterns may impose a particular challenge to traditional approaches in terms of relevant exposure metrics and the analytes chosen to characterize exposure atmospheres. This paper addresses major developments in the discipline of inhalation toxicology with particular emphasis on the state-of-the-art testing of complex mixtures. 3. Cognitive Strengths and Weaknesses Associated with Prader-Willi Syndrome. ERIC Educational Resources Information Center Conners, Frances A.; Rosenquist, Celia J.; Atwell, Julie A.; Klinger, Laura Grofer 2000-01-01 Nine adults with Prader-Willi syndrome (PWS) and nine age- and IQ-matched adults with PWS completed standardized tests of long-term and short-term memory, visual and auditory processing, and reading and mathematics achievement. Contrary to previous findings, long-term memory in PWS subjects was strong relative to IQ and there was no evidence that… 4. Finnish Vocational Education and Training in Comparison: Strengths and Weaknesses ERIC Educational Resources Information Center Virolainen, Maarit; Stenström, Marja-Leena 2014-01-01 The study investigates how the Finnish model of providing initial vocational education and training (IVET) has succeeded in terms of enhancing educational progress and employability. 
A relatively high level of participation in IVET makes the Finnish model distinctive from those of three other Nordic countries: Denmark, Norway and Sweden. All four… 5. The Strengths and Weaknesses of ISO 9000 in Vocational Education ERIC Educational Resources Information Center Bevans-Gonzales, Theresa L.; Nair, Ajay T. 2004-01-01 ISO 9000 is a set of quality standards that assists an organization to identify, correct and prevent errors, and to promote continual improvement. Educational institutions worldwide are implementing ISO 9000 as they face increasing external pressure to maintain accountability for funding. Similar to other countries, in the United States vocational… 6. Nonexperimental Research: Strengths, Weaknesses and Issues of Precision ERIC Educational Resources Information Center Reio, Thomas G., Jr. 2016-01-01 Purpose: Nonexperimental research, defined as any kind of quantitative or qualitative research that is not an experiment, is the predominate kind of research design used in the social sciences. How to unambiguously and correctly present the results of nonexperimental research, however, remains decidedly unclear and possibly detrimental to applied… 7. Heterogeneous, weakly coupled map lattices NASA Astrophysics Data System (ADS) Sotelo Herrera, M.a. Dolores; San Martín, Jesús; Porter, Mason A. 2016-07-01 Coupled map lattices (CMLs) are often used to study emergent phenomena in nature. It is typically assumed (unrealistically) that each component is described by the same map, and it is important to relax this assumption. In this paper, we characterize periodic orbits and the laminar regime of type-I intermittency in heterogeneous weakly coupled map lattices (HWCMLs). We show that the period of a cycle in an HWCML is preserved for arbitrarily small coupling strengths even when an associated uncoupled oscillator would experience a period-doubling cascade. 
Our results characterize periodic orbits both near and far from saddle-node bifurcations, and we thereby provide a key step for examining the bifurcation structure of heterogeneous CMLs. 8. Strength nutrition. PubMed Volek, Jeff S 2003-08-01 Muscle strength is determined by muscle size and factors related to neural recruitment. Resistance training is a potent stimulus for increasing muscle size and strength. These increases are, to a large extent, influenced and mediated by changes in hormones that regulate important events during the recovery process following exercise. Provision of nutrients in the appropriate amounts and at the appropriate times is necessary to optimize the recovery process. This review discusses the results of research that has examined the potential for nutrition and dietary supplements to impact the acute response to resistance exercise and chronic adaptations to resistance training. To date, the most promising strategies to augment gains in muscle size and strength appear to be consumption of protein-carbohydrate calories before and after resistance exercise, and creatine supplementation. 9. Weak shock reflection NASA Astrophysics Data System (ADS) Hunter, John K.; Brio, Moysey 2000-05-01 We present numerical solutions of a two-dimensional inviscid Burgers equation which provides an asymptotic description of the Mach reflection of weak shocks. In our numerical solutions, the incident, reflected, and Mach shocks meet at a triple point, and there is a supersonic patch behind the triple point, as proposed by Guderley for steady weak-shock reflection. A theoretical analysis indicates that there is an expansion fan at the triple point, in addition to the three shocks. The supersonic patch is extremely small, and this work is the first time it has been resolved. 10. A Review of the Theory and Research Underlying the StrengthsQuest Program for Students. 
The Quest for Strengths ERIC Educational Resources Information Center Hodges, Timothy D.; Harter, James K. 2005-01-01 StrengthsQuest is a student program that focuses on strengths rather than weaknesses. It is intended to lead students to discover their natural talents and gain unique and valuable insights into how to develop such talents into strengths--strengths that equip them to succeed and to make important decisions that enable them to balance the demands… 11. Weak interactions, omnivory and emergent food-web properties. PubMed Central Emmerson, Mark; Yearsley, Jon M. 2004-01-01 Empirical studies have shown that, in real ecosystems, species-interaction strengths are generally skewed in their distribution towards weak interactions. Some theoretical work also suggests that weak interactions, especially in omnivorous links, are important for the local stability of a community at equilibrium. However, the majority of theoretical studies use uniform distributions of interaction strengths to generate artificial communities for study. We investigate the effects of the underlying interaction-strength distribution upon the return time, permanence and feasibility of simple Lotka-Volterra equilibrium communities. We show that a skew towards weak interactions promotes local and global stability only when omnivory is present. It is found that skewed interaction strengths are an emergent property of stable omnivorous communities, and that this skew towards weak interactions creates a dynamic constraint maintaining omnivory. Omnivory is more likely to occur when omnivorous interactions are skewed towards weak interactions. However, a skew towards weak interactions increases the return time to equilibrium, delays the recovery of ecosystems and hence decreases the stability of a community. When no skew is imposed, the set of stable omnivorous communities shows an emergent distribution of skewed interaction strengths. 
Our results apply to both local and global concepts of stability and are robust to the definition of a feasible community. These results are discussed in the light of empirical data and other theoretical studies, in conjunction with their broader implications for community assembly. PMID:15101699 12. In praise of weakness NASA Astrophysics Data System (ADS) Steinberg, Aephraim; Feizpour, Amir; Rozema, Lee; Mahler, Dylan; Hayat, Alex 2013-03-01 Quantum physics is being transformed by a radical new conceptual and experimental approach known as weak measurement that can do everything from tackling basic quantum mysteries to mapping the trajectories of photons in a Young's double-slit experiment. Aephraim Steinberg, Amir Feizpour, Lee Rozema, Dylan Mahler and Alex Hayat unveil the power of this new technique. 13. Hypernuclear Weak Decays NASA Astrophysics Data System (ADS) Itonaga, K.; Motoba, T. The recent theoretical studies of Lambda-hypernuclear weak decays of the nonmesonic and pi-mesonic ones are developed with the aim of disclosing the link between the experimental decay observables and the underlying basic weak decay interactions and the weak decay mechanisms. The expressions of the nonmesonic decay rates Gamma_{nm} and the decay asymmetry parameter alpha_1 of protons from the polarized hypernuclei are presented in the shell model framework. We then introduce the meson theoretical Lambda N -> NN interactions which include the one-meson exchanges, the correlated-2pi exchanges, and the chiral-pair-meson exchanges. The features of meson exchange potentials and their roles on the nonmesonic decays are discussed. With the adoption of the pi + 2pi/rho + 2pi/sigma + omega + K + rhopi/a_1 + sigmapi/a_1 exchange potentials, we have carried out the systematic calculations of the nonmesonic decay observables for light-to-heavy hypernuclei.
The present model can account for the available experimental data of the decay rates, Gamma_n/Gamma_p ratios, and the intrinsic asymmetry parameters alpha_Lambda (alpha_Lambda is related to alpha_1) of emitted protons well and consistently within the error bars. The hypernuclear lifetimes are evaluated by converting the total weak decay rates Gamma_{tot} = Gamma_pi + Gamma_{nm} to lifetimes tau, which exhibit a saturation property for the hypernuclear mass A ≥ 30 and agree well overall with experimental data for the mass range from light to heavy hypernuclei except for the very light ones. Future extensions of the model and the remaining problems are also mentioned. The pi-mesonic weak processes are briefly surveyed, and the calculations and predictions are compared and confirmed by the recent high precision FINUDA pi-mesonic decay data. This shows that the theoretical basis seems to be firmly grounded. 14. Weak Gravitational Lensing NASA Astrophysics Data System (ADS) Pires, Sandrine; Starck, Jean-Luc; Leonard, Adrienne; Réfrégier, Alexandre 2012-03-01 This chapter reviews the data mining methods recently developed to solve standard data problems in weak gravitational lensing. We detail the different steps of the weak lensing data analysis along with the different techniques dedicated to these applications. An overview of the different techniques currently used will be given along with future prospects. Until about 30 years ago, astronomers thought that the Universe was composed almost entirely of ordinary matter: protons, neutrons, electrons, and atoms. The field of weak lensing has been motivated by the observations made in the last decades showing that visible matter represents only about 4-5% of the Universe (see Figure 14.1). Currently, the majority of the Universe is thought to be dark, that is, it does not emit electromagnetic radiation.
The Universe is thought to be mostly composed of an invisible, pressureless matter - potentially a relic from higher-energy theories - called "dark matter" (20-21%) and of an even more mysterious term, described in Einstein's equations as a vacuum energy density, called "dark energy" (70%). This "dark" Universe is not well described or even understood; its presence is inferred indirectly from its gravitational effects, both on the motions of astronomical objects and on light propagation. Understanding it could be the next breakthrough in cosmology. Today's cosmology is based on a cosmological model that contains various parameters that need to be determined precisely, such as the matter density parameter Omega_m or the dark energy density parameter Omega_lambda. Weak gravitational lensing is believed to be the most promising tool to understand the nature of dark matter and to constrain the cosmological parameters used to describe the Universe because it provides a method to directly map the distribution of dark matter (see [1,6,60,63,70]). From this dark matter distribution, the nature of dark matter can be better understood and better constraints can be placed on dark energy. 15. Composite weak bosons SciTech Connect Suzuki, M. 1988-04-01 The dynamical mechanism of composite W and Z is studied in a 1/N field theory model with four-fermion interactions in which global weak SU(2) symmetry is broken explicitly by electromagnetic interaction. Issues involved in such a model are discussed in detail. Deviation from gauge coupling due to compositeness and higher order loop corrections are examined to show that this class of models is consistent not only theoretically but also experimentally. 16. Sequential weak measurement SciTech Connect Mitchison, Graeme; Jozsa, Richard; Popescu, Sandu 2007-12-15 The notion of weak measurement provides a formalism for extracting information from a quantum system in the limit of vanishing disturbance to its state.
Here we extend this formalism to the measurement of sequences of observables. When these observables do not commute, we may obtain information about joint properties of a quantum system that would be forbidden in the usual strong measurement scenario. As an application, we provide a physically compelling characterization of the notion of counterfactual quantum computation. 17. Effect of grain strength distribution on rock fracture SciTech Connect Woffington, Austin 1996-05-01 This report discloses my contributions to the study of grain strength distribution and its effects in computer-modeled rock lattices. Frackrock v35.15, developed by Blair and Cook, was used to model bimodal grain strength distribution and test the lattices under stress. New data was gathered by running trials with a standardized weak grain strength and compared to the original data with a mean weak grain strength. The new data set shows lattice failure to be less predictable with a higher percentage of weak sites. Strain on the lattice is affected by the wide distribution of grain strengths: the closer the grain strengths are to each other, the more predictable they get. Further testing needs to be done on larger lattices, boundaryless lattices, and multiple grain strength distributions. This will show the effects of size and stress on the grain strength distribution and will add to advancing our knowledge of how rocks crack and break under stressful conditions. 18. Weak A' phenotypes PubMed Central Cartron, J. P.; Gerbal, A.; Hughes-Jones, N. C.; Salmon, C. 1974-01-01 Thirty-five weak A samples including fourteen A3, eight Ax, seven Aend, three Am and three Ae1 were studied in order to determine their A antigen site density, using an IgG anti-A labelled with 125I. The values obtained ranged between 30,000 A antigen sites for A3 individuals and 700 sites for the Ae1 red cells.
The hierarchy of values observed made it possible to establish a quantitative relationship between the red cell agglutinability of these phenotypes measured under standard conditions, and their antigen site density. PMID:4435836 19. Weakly broken galileon symmetry SciTech Connect Pirtskhalava, David; Santoni, Luca; Trincherini, Enrico; Vernizzi, Filippo 2015-09-01 Effective theories of a scalar ϕ invariant under the internal galileon symmetry ϕ → ϕ + b_μ x^μ have been extensively studied due to their special theoretical and phenomenological properties. In this paper, we introduce the notion of weakly broken galileon invariance, which characterizes the unique class of couplings of such theories to gravity that maximally retain their defining symmetry. The curved-space remnant of the galileon's quantum properties allows one to construct (quasi) de Sitter backgrounds largely insensitive to loop corrections. We exploit this fact to build novel cosmological models with interesting phenomenology, relevant for both inflation and late-time acceleration of the universe. 20. Weak decay of hypernuclei SciTech Connect Grace, R. 1983-01-01 The Moby Dick spectrometer (at BNL) in coincidence with a range spectrometer and a TOF neutron detector will be used to study the weak decay modes of ¹²ΛC. The Moby Dick spectrometer will be used to reconstruct and tag events in which specific hypernuclear states are formed in the reaction K⁻ + ¹²C → π⁻ + ¹²ΛC. Subsequent emission of decay products (pions, protons and neutrons) in coincidence with the fast forward pion will be detected in a time and range spectrometer, and a neutron detector.
2. Corium crust strength measurements. SciTech Connect Lomperski, S.; Nuclear Engineering Division 2009-11-01 Corium strength is of interest in the context of a severe reactor accident in which molten core material melts through the reactor vessel and collects on the containment basemat. Some accident management strategies involve pouring water over the melt to solidify it and halt corium/concrete interactions. The effectiveness of this method could be influenced by the strength of the corium crust at the interface between the melt and coolant. A strong, coherent crust anchored to the containment walls could allow the yet-molten corium to fall away from the crust as it erodes the basemat, thereby thermally decoupling the melt from the coolant and sharply reducing the cooling rate. This paper presents a diverse collection of measurements of the mechanical strength of corium.
The data are based on load tests of corium samples in three different contexts: (1) small blocks cut from the debris of the large-scale MACE experiments, (2) 30 cm-diameter, 75 kg ingots produced by SSWICS quench tests, and (3) high temperature crusts loaded during large-scale corium/concrete interaction (CCI) tests. In every case the corium consisted of varying proportions of UO₂, ZrO₂, and the constituents of concrete to represent an LWR melt at different stages of a molten core/concrete interaction. The collection of data was used to assess the strength and stability of an anchored, plant-scale crust. The results indicate that such a crust is likely to be too weak to support itself above the melt. It is therefore improbable that an anchored crust configuration could persist and the melt become thermally decoupled from the water layer to restrict cooling and prolong an attack of the reactor cavity concrete. 3. QM02 Strength Measurement SciTech Connect Welch, J; Wu, J.; /SLAC 2010-11-24 In late April, Paul Emma reported that his orbit fitting program could find a reasonably good fit only if the strength of QM02 was changed from the design value of -5.83 kG to -6.25 kG - a strength change of 7.3%. In late May, we made a focal length measurement of QM02 by turning off all focusing optics between YC07 and BPMS1 (in the spectrometer line) except for QM02 and adjusted the strength of QM02 so that vertical kicks by YC07 did not produce any displacements at BPMS1 (see Figure 1). The result quoted in the LCLS elog was that QM02 appeared to be 6% too weak, which approximately agreed with Paul's observation. The analysis used for the entry in the log book was based on the thin lens approximation and used the following numbers: Distance YC07 to QM02 - 5.128 m; Distance QM02 to BPMS1 - 1.778 m; and Energy - 135 MeV. These distances were computed from the X,Z coordinates given on the large plot of the Injector on the wall of the control room.
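The thin-lens numbers in this entry are easy to reproduce. The sketch below is ours, not the original analysis code (function and variable names are illustrative); it uses only the point-to-point imaging condition 1/f = 1/d1 + 1/d2, which makes a kick at YC07 produce zero displacement at BPMS1, together with the distances quoted above:

```python
# Illustrative reconstruction of the thin-lens estimate (assumed names; not
# the original analysis code). A vertical kick at YC07, a distance d1
# upstream of QM02, produces no displacement at BPMS1, a distance d2
# downstream, when QM02 images YC07 onto BPMS1: 1/f = 1/d1 + 1/d2.

def point_to_point_focal_length(d1: float, d2: float) -> float:
    """Thin-lens focal length that images a source at d1 onto a plane at d2."""
    return d1 * d2 / (d1 + d2)

d1 = 5.128  # YC07 to QM02 [m], from the text
f_wall = point_to_point_focal_length(d1, 1.778)  # wall-chart distance [m]
f_mad = point_to_point_focal_length(d1, 1.845)   # MAD-file distance [m]

# The integrated strength needed to satisfy the condition scales as 1/f, so
# the corrected distance lowers the required strength by roughly 3%, which
# is consistent with the inferred deficit growing from about 6% to about 9%.
print(f"f with d2 = 1.778 m: {f_wall:.3f} m")  # ~1.320 m
print(f"f with d2 = 1.845 m: {f_mad:.3f} m")   # ~1.357 m
print(f"required strength change: {(f_wall / f_mad - 1) * 100:.1f}%")
```

The thick-lens comparison described next in the entry additionally needs the 0.108 m effective length and is not reproduced here.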
On review of the MAD output file coordinates, it seems that the distance used for QM02 to BPMS1 is not 1.778 m. The correct value is Distance, center of QM02 to BPMS1 - 1.845 m. There may be a typo on the wall chart values for the coordinates of BPMS1, or perhaps there was a misinterpretation of edge versus center of QM02. In any case, the effect of this change is that the thin lens estimate changes from 6% too weak to 9% too weak. At John Galayda's suggestion, we looked into the thin lens versus thick lens approximation. A Mathematica program was written to solve for the K value of QM02, in the thick lens approximation, that provides point-to-point focusing from YC07 to BPMS1, and to compare this number with the value obtained using the thin lens approximation. The length of QM02 used in the thick lens calculation is the effective length of 0.108 m determined by magnetic measurements. The result of the Mathematica calculation is that the thin lens approximation predicts less magnet strength is required to produce the… 4. Weakly relativistic plasma expansion SciTech Connect Fermous, Rachid; Djebli, Mourad 2015-04-15 Plasma expansion is an important physical process that takes place in laser interactions with solid targets. Within a self-similar model for the hydrodynamical multi-fluid equations, we investigated the expansion of both dense and under-dense plasmas. The weakly relativistic electrons are produced by ultra-intense laser pulses, while ions are supposed to be in a non-relativistic regime. Numerical investigations have shown that relativistic effects are important for under-dense plasma and are characterized by a finite ion front velocity. Dense plasma expansion is found to be governed mainly by quantum contributions in the fluid equations that originate from the degenerate pressure in addition to the nonlinear contributions from exchange and correlation potentials.
The quantum degeneracy parameter profile provides clues to set the limit between under-dense and dense relativistic plasma expansions at a given density and temperature. 5. Reptation in a Weak Driving Field NASA Astrophysics Data System (ADS) Aalberts, Daniel; van Leeuwen, J. M. J. 1997-03-01 A simplified model of reptation is presented. The Master Equation of the model is systematically solved by expansion in powers of the strength of the driving field. From the explicit form of the probability distribution, exact conclusions can be drawn about the average shape of the polymer, its drift velocity, and the zero field diffusion constant. Correlations between segments of the chain are calculated and turn out to be large, even in the weak driving field limit. The results are compared with simulations of the model. 6. Strong mobility in weakly disordered systems SciTech Connect Ben-Naim, Eli; Krapivsky, Pavel 2009-01-01 We study transport of interacting particles in weakly disordered media. Our one-dimensional system includes (i) disorder, the hopping rate governing the movement of a particle between two neighboring lattice sites is inhomogeneous, and (ii) hard core interaction, the maximum occupancy at each site is one particle. We find that over a substantial regime, the root-mean-square displacement σ of a particle grows superdiffusively with time t, σ ≈ (εt)^{2/3}, where ε is the disorder strength. Without disorder the particle displacement is subdiffusive, σ ≈ t^{1/4}, and therefore disorder strongly enhances particle mobility. We explain this effect using scaling arguments, and verify the theoretical predictions through numerical simulations. Also, the simulations show that regardless of disorder strength, disorder leads to stronger mobility over an intermediate time regime. 7.
Enhancing entanglement trapping by weak measurement and quantum measurement reversal SciTech Connect Zhang, Ying-Jie; Han, Wei; Fan, Heng; Xia, Yun-Jie 2015-03-15 In this paper, we propose a scheme to enhance trapping of entanglement of two qubits in the environment of a photonic band gap material. Our entanglement trapping promotion scheme makes use of combined weak measurements and quantum measurement reversals. The optimal promotion of entanglement trapping can be acquired with a reasonable finite success probability by adjusting measurement strengths. - Highlights: • Propose a scheme to enhance entanglement trapping in photonic band gap material. • Weak measurement and its reversal are performed locally on individual qubits. • Obtain an optimal condition for maximizing the concurrence of entanglement trapping. • Entanglement sudden death can be prevented by weak measurement in photonic band gap. 8. Weak measurements beyond the Aharonov-Albert-Vaidman formalism SciTech Connect Wu Shengjun; Li Yang 2011-05-15 We extend the idea of weak measurements to the general case, provide a complete treatment, and obtain results for both the regime when the preselected and postselected states (PPS) are almost orthogonal and the regime when they are exactly orthogonal. We surprisingly find that for a fixed interaction strength, there may exist a maximum signal amplification and a corresponding optimum overlap of PPS to achieve it. For weak measurements in the orthogonal regime, we find interesting quantities that play the same role that weak values play in the nonorthogonal regime. 9. Dynamic Strength Ceramic Nanocomposites Under Pulse Loading NASA Astrophysics Data System (ADS) Skripnyak, Evgeniya G.; Skripnyak, Vladimir V.; Vaganova, Irina K.; Skripnyak, Vladimir A. 2015-06-01 Multi-scale computer simulation approach has been applied to research of strength of nanocomposites under dynamic loading. 
The influence of mesoscopic substructures on the dynamic strength of ceramic and hybrid nanocomposites, which can be formed using additive manufacturing, was numerically investigated. Under weak shock-wave loading, the shear strength and the spall strength of ceramic and hybrid nanocomposites depend not only on phase concentration and porosity but also on the size parameters of skeleton substructures. The influence of the skeleton parameter on the shear strength and the spall strength of ceramic nanocomposites with the same concentration of phases decreases with increasing amplitude of the shock pulse of microsecond duration above the double amplitude of the Hugoniot elastic limit of nanocomposites. This research, carried out in 2014-2015, was supported by a grant from the Tomsk State University Academic D.I. Mendeleev Fund Program and also by the Ministry of Science and Education of the Russian Federation (State task 2014/223, project 1943, Agreement 14.132. 10. Plasma transport theory spanning weak to strong coupling SciTech Connect Daligault, Jérôme; Baalrud, Scott D. 2015-06-29 We describe some of the most striking characteristics of particle transport in strongly coupled plasmas across a wide range of Coulomb coupling strength. We then discuss the effective potential theory, which is an approximation that was recently developed to extend conventional weakly coupled plasma transport theory into the strongly coupled regime in a manner that is practical to evaluate efficiently. 11. Negative probabilities and information gain in weak measurements NASA Astrophysics Data System (ADS) Zhu, Xuanmin; Wei, Qun; Liu, Quanhui; Wu, Shengjun 2013-11-01 We study the outcomes in a general measurement with postselection, and derive upper bounds for the pointer readings in weak measurement. The probabilities inferred from weak measurements change along with the coupling strength, and the true probabilities can be obtained when the coupling is strong enough.
By calculating the information gain of the measuring device about which path the particles pass through, we show that the "negative probabilities" only emerge in cases where the information gain is small due to very weak coupling between the measuring device and the particles. When the coupling strength increases, we can unambiguously determine whether a particle passes through a given path every time, hence the average shifts always represent true probabilities, and the strange "negative probabilities" disappear. 12. Application of Strength Diagnosis. ERIC Educational Resources Information Center Newton, Robert U.; Dugan, Eric 2002-01-01 Discusses the various strength qualities (maximum strength, high- and low-load speed strength, reactive strength, rate of force development, and skill performance), noting why a training program design based on strength diagnosis can lead to greater efficacy and better performance gains for the athlete. Examples of tests used to assess strength… 13. Experimental investigations of weak definite and weak indefinite noun phrases PubMed Central Klein, Natalie M.; Gegg-Harrison, Whitney M.; Carlson, Greg N.; Tanenhaus, Michael K. 2013-01-01 Definite noun phrases typically refer to entities that are uniquely identifiable in the speaker and addressee's common ground. Some definite noun phrases (e.g. the hospital in Mary had to go to the hospital and John did too) seem to violate this uniqueness constraint. We report six experiments that were motivated by the hypothesis that these "weak definite" interpretations arise in "incorporated" constructions. Experiments 1-3 compared nouns that seem to allow for a weak definite interpretation (e.g. hospital, bank, bus, radio) with those that do not (e.g. farm, concert, car, book). Experiments 1 and 2 used an instruction-following task and picture-judgment task, respectively, to demonstrate that a weak definite need not uniquely refer.
In Experiment 3 participants imagined scenarios described by sentences such as The Federal Express driver had to go to the hospital/farm. The imagined scenarios following weak definite noun phrases were more likely to include conventional activities associated with the object, whereas following regular nouns, participants were more likely to imagine scenarios that included typical activities associated with the subject; similar effects were observed with weak indefinites. Experiment 4 found that object-related activities were reduced when the same subject and object were used with a verb that does not license weak definite interpretations. In Experiment 5, a science fiction story introduced an artificial lexicon for novel concepts. Novel nouns that shared conceptual properties with English weak definite nouns were more likely to allow weak reference in a judgment task. Experiment 6 demonstrated that familiarity for definite articles and anti-familiarity for indefinite articles applies to the activity associated with the noun, consistent with predictions made by the incorporation analysis. PMID:23685208 14. Resisting Weakness of the Will PubMed Central Levy, Neil 2012-01-01 I develop an account of weakness of the will that is driven by experimental evidence from cognitive and social psychology. I will argue that this account demonstrates that there is no such thing as weakness of the will: no psychological kind corresponds to it. Instead, weakness of the will ought to be understood as depletion of System II resources. Neither the explanatory purposes of psychology nor our practical purposes as agents are well-served by retaining the concept. I therefore suggest that we ought to jettison it, in favour of the vocabulary and concepts of cognitive psychology. PMID:22984298 15. Weak-shock reflection factors SciTech Connect Reichenbach, H.; Kuhl, A.L.
1993-09-07 The purpose of this paper is to compare reflection factors for weak shocks from various surfaces, and to focus attention on some unsolved questions. Three different cases are considered: square-wave planar shock reflection from wedges; square-wave planar shock reflection from cylinders; and spherical blast wave reflection from a planar surface. We restrict ourselves to weak shocks. Shocks with a Mach number of M₀ < 1.56 in air or with an overpressure of Δp_I < 25 psi (1.66 bar) under normal ambient conditions are called weak. 16. Weak interactions and presupernova evolution SciTech Connect Aufderheide, M.B. (State Univ. of New York, Dept. of Physics) 1991-02-19 The role of weak interactions, particularly electron capture and β⁻ decay, in presupernova evolution is discussed. The present uncertainty in these rates is examined and the possibility of improving the situation is addressed. 12 refs., 4 figs. 17. Precision Metrology Using Weak Measurements NASA Astrophysics Data System (ADS) Zhang, Lijian; Datta, Animesh; Walmsley, Ian A. 2015-05-01 Weak values and measurements have been proposed as a means to achieve dramatic enhancements in metrology based on the greatly increased range of possible measurement outcomes. Unfortunately, the very large values of measurement outcomes occur with highly suppressed probabilities. This raises three vital questions in weak-measurement-based metrology. Namely, (Q1) Does postselection enhance the measurement precision? (Q2) Does weak measurement offer better precision than strong measurement? (Q3) Is it possible to beat the standard quantum limit or to achieve the Heisenberg limit with weak measurement using only classical resources?
We analyze these questions for two prototypical, and generic, measurement protocols and show that while the answers to the first two questions are negative for both protocols, the answer to the last is affirmative for measurements with phase-space interactions, and negative for configuration space interactions. Our results, particularly the ability of weak measurements to perform on par with strong measurements in some cases, are instructive for the design of weak-measurement-based protocols for quantum metrology. 18. Comparison of quadriceps strength and handgrip strength in their association with health outcomes in older adults in primary care. PubMed Chan, On Ying A; van Houwelingen, Anne H; Gussekloo, Jacobijn; Blom, Jeanet W; den Elzen, Wendy P J 2014-01-01 Sarcopenia is thought to play a major role in the functional impairment that occurs with old age. In clinical practice, sarcopenia is often determined by measuring handgrip strength. Here, we compared the lower limb quadriceps strength to the handgrip strength in their association with health outcomes in older adults in primary care. Our study population consisted of older adults (n = 764, 68.2% women, median age 83) who participated in the Integrated Systemic Care for Older People (ISCOPE) study. Participants were visited at baseline to measure quadriceps strength and handgrip strength. Data on health outcomes were obtained at baseline and after 12 months (including life satisfaction, disability in daily living, GP contact-time and hospitalization). Quadriceps strength and handgrip strength showed a weak association (β = 0.42 [95% CI 0.33-0.50]; R² = 0.17). Quadriceps strength and handgrip strength were independently associated with health outcomes at baseline, including quality of life, disability in daily living, GP contact-time, hospitalization, and gait speed.
Combined weakness of the quadriceps and handgrip distinguished a most vulnerable subpopulation that presented with the poorest health outcomes. At follow-up, handgrip strength showed an association with quality of life (β = 0.05; P = 0.002) and disability in daily living (β = -0.5; P = 0.004). Quadriceps weakness did not further contribute to the prediction of the measured health outcomes. We conclude that quadriceps strength is only moderately associated with handgrip strength in an older population and that the combination of quadriceps strength and handgrip strength measurements may aid in the identification of older adults in primary care with the poorest health outcomes. In the prediction of poor health outcomes, quadriceps strength measurements do not show an added value to the handgrip strength. 19. Early exercise rehabilitation of muscle weakness in acute respiratory failure patients. PubMed Berry, Michael J; Morris, Peter E 2013-10-01 Acute respiratory failure patients experience significant muscle weakness, which contributes to prolonged hospitalization and functional impairments after hospital discharge. Based on our previous work, we hypothesize that an exercise intervention initiated early in the intensive care unit aimed at improving skeletal muscle strength could decrease hospital stay and attenuate the deconditioning and skeletal muscle weakness experienced by these patients. 20. Emergent soft monopole modes in weakly bound deformed nuclei NASA Astrophysics Data System (ADS) Pei, J. C.; Kortelainen, M.; Zhang, Y. N.; Xu, F. R. 2014-11-01 Based on the Hartree-Fock-Bogoliubov solutions in large deformed coordinate spaces, the finite amplitude method for the quasiparticle random-phase approximation (FAM-QRPA) has been implemented, providing a suitable approach to probing collective excitations of weakly bound nuclei embedded in the continuum. 
The monopole excitation modes in magnesium isotopes up to the neutron drip line have been studied with the FAM-QRPA framework using both coordinate-space and harmonic-oscillator-basis methods. Enhanced soft monopole strengths and collectivity as a result of weak-binding effects have been unambiguously demonstrated. 1. Influence of environmental noise on the weak value amplification NASA Astrophysics Data System (ADS) Zhu, Xuanmin; Zhang, Yu-Xiang 2016-08-01 Quantum systems are always disturbed by environmental noise. We have investigated the influence of the environmental noise on the amplification in weak measurements. Three typical quantum noise processes are discussed in this article. The maximum expectation values of the observables of the measuring device decrease sharply with the strength of the depolarizing and phase damping channels, while the amplification effect of weak measurement is immune to the amplitude damping noise. To obtain significantly amplified signals, we must ensure that the preselected quantum systems are kept away from the depolarizing and phase damping processes. 2. Celebrate Strengths, Nurture Affinities: A Conversation with Mel Levine ERIC Educational Resources Information Center Scherer, Marge 2006-01-01 In this interview with "Educational Leadership," pediatrician Dr. Mel Levine, cofounder of "All Kinds of Minds," explains why students and educators should learn about eight neurodevelopmental functions that undergird our strengths and weaknesses. For the most part, he notes, adults who lead successful lives mobilize their strengths and compensate… 3.
Conformational transitions of a weak polyampholyte NASA Astrophysics Data System (ADS) Narayanan Nair, Arun Kumar; Uyaver, Sahin; Sun, Shuyu 2014-10-01 Using grand canonical Monte Carlo simulations of a flexible polyelectrolyte where the charges are in contact with a reservoir of constant chemical potential given by the solution pH, we study the behavior of weak polyelectrolytes in poor and good solvent conditions for the polymer backbone. We address the titration behavior and conformational properties of a flexible diblock polyampholyte chain formed of two oppositely charged weak polyelectrolyte blocks, each containing an equal number of identical monomers. The change of solution pH induces charge asymmetry in a diblock polyampholyte. For diblock polyampholyte chains in poor solvents, we demonstrate that a discontinuous transition between extended (tadpole) and collapsed (globular) conformational states is attainable by varying the solution pH. The double-minima structure in the probability distribution of the free energy provides direct evidence for the first-order-like nature of this transition. At the isoelectric point, the electrostatically driven coil-globule transition of diblock polyampholytes in good solvents is found to consist of different regimes identified with increasing electrostatic interaction strength. At pH values above or below the isoelectric point, diblock chains are found to have polyelectrolyte-like behavior due to repulsion between uncompensated charges along the chain. 4. Weak Energy: Form and Function NASA Astrophysics Data System (ADS) Parks, Allen D. The equation of motion for a time-dependent weak value of a quantum mechanical observable contains a complex valued energy factor—the weak energy of evolution. This quantity is defined by the dynamics of the pre-selected and post-selected states which specify the observable's weak value.
It is shown that this energy: (i) is manifested as dynamical and geometric phases that govern the evolution of the weak value during the measurement process; (ii) satisfies the Euler-Lagrange equations when expressed in terms of Pancharatnam (P) phase and Fubini-Study (FS) metric distance; (iii) provides for a PFS stationary action principle for quantum state evolution; (iv) time translates correlation amplitudes; (v) generalizes the temporal persistence of state normalization; and (vi) obeys a time-energy uncertainty relation. A similar complex valued quantity—the pointed weak energy of an evolving quantum state—is also defined and several of its properties in PFS coordinates are discussed. It is shown that the imaginary part of the pointed weak energy governs the state's survival probability and its real part is—to within a sign—the Mukunda-Simon geometric phase for arbitrary evolutions or the Aharonov-Anandan (AA) geometric phase for cyclic evolutions. Pointed weak energy gauge transformations and the PFS 1-form are defined and discussed and the relationship between the PFS 1-form and the AA connection 1-form is established. [Editor's note: for a video of the talk given by Prof. Parks at the Aharonov-80 conference in 2012 at Chapman University, see http://quantum.chapman.edu/talk-25.] 5. Weak Selection and Protein Evolution PubMed Central Akashi, Hiroshi; Osada, Naoki; Ohta, Tomoko 2012-01-01 The “nearly neutral” theory of molecular evolution proposes that many features of genomes arise from the interaction of three weak evolutionary forces: mutation, genetic drift, and natural selection acting at its limit of efficacy. Such forces generally have little impact on allele frequencies within populations from generation to generation but can have substantial effects on long-term evolution.
The evolutionary dynamics of weakly selected mutations are highly sensitive to population size, and near neutrality was initially proposed as an adjustment to the neutral theory to account for general patterns in available protein and DNA variation data. Here, we review the motivation for the nearly neutral theory, discuss the structure of the model and its predictions, and evaluate current empirical support for interactions among weak evolutionary forces in protein evolution. Near neutrality may be a prevalent mode of evolution across a range of functional categories of mutations and taxa. However, multiple evolutionary mechanisms (including adaptive evolution, linked selection, changes in fitness-effect distributions, and weak selection) can often explain the same patterns of genome variation. Strong parameter sensitivity remains a limitation of the nearly neutral model, and we discuss concave fitness functions as a plausible underlying basis for weak selection. PMID:22964835 6. Spin freezing in geometrically frustrated antiferromagnets with weak disorder. PubMed Saunders, T E; Chalker, J T 2007-04-13 We investigate the consequences for geometrically frustrated antiferromagnets of weak disorder in the strength of exchange interactions. Taking as a model the classical Heisenberg antiferromagnet with nearest neighbor exchange on the pyrochlore lattice, we examine low-temperature behavior. We show that spatial modulation of exchange generates long-range effective interactions within the extensively degenerate ground states of the clean system. Using Monte Carlo simulations, we find a spin glass transition at a temperature set by the disorder strength. Disorder of this type, which is generated by random strains in the presence of magnetoelastic coupling, may account for the spin freezing observed in many geometrically frustrated magnets. 7. 
Warping the Weak Gravity Conjecture NASA Astrophysics Data System (ADS) Kooner, Karta; Parameswaran, Susha; Zavala, Ivonne 2016-08-01 The Weak Gravity Conjecture, if valid, rules out simple models of Natural Inflation by restricting their axion decay constant to be sub-Planckian. We revisit stringy attempts to realise Natural Inflation, with a single open string axionic inflaton from a probe D-brane in a warped throat. We show that warped geometries can allow the requisite super-Planckian axion decay constant to be achieved, within the supergravity approximation and consistently with the Weak Gravity Conjecture. Preliminary estimates of the brane backreaction suggest that the probe approximation may be under control. However, there is a tension between large axion decay constant and high string scale, where the requisite high string scale is difficult to achieve in all attempts to realise large field inflation using perturbative string theory. We comment on the Generalized Weak Gravity Conjecture in the light of our results. 8. State tomography via weak measurements PubMed Central Wu, Shengjun 2013-01-01 Recent work has revealed that the wave function of a pure state can be measured directly and that complementary knowledge of a quantum system can be obtained simultaneously by weak measurements. However, the original scheme applies only to pure states, and it is not efficient because most of the data are discarded by post-selection. Here, we propose tomography schemes for pure states and for mixed states via weak measurements, and our schemes are more efficient because we do not discard any data. Furthermore, we demonstrate that any matrix element of a general state can be directly read from an appropriate weak measurement. The density matrix (with all of its elements) represents all that is directly accessible from a general measurement. PMID:23378924 9. Cosmology and the weak interaction NASA Technical Reports Server (NTRS) Schramm, David N. 
1989-01-01 The weak interaction plays a critical role in modern Big Bang cosmology. Two of its most publicized cosmological connections are emphasized: big bang nucleosynthesis and dark matter. The first of these is connected to the cosmological prediction of neutrino flavors, N(sub nu) is approximately 3, which is now being confirmed. The second is interrelated to the whole problem of galaxy and structure formation in the universe. The role of the weak interaction both for dark matter candidates and for the problem of generating seeds to form structure is demonstrated. 10. Weak value amplification considered harmful NASA Astrophysics Data System (ADS) Ferrie, Christopher; Combes, Joshua 2014-03-01 We show using statistically rigorous arguments that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of parameter estimation and signal detection. We show that using all data and considering the joint distribution of all measurement outcomes yields the optimal estimator. Moreover, we show that estimation using the maximum likelihood technique with weak values as small as possible produces better performance for quantum metrology. In doing so, we identify the optimal experimental arrangement to be the one which reveals the maximal eigenvalue of the square of system observables. We also show these conclusions do not change in the presence of technical noise. 11. Cosmology and the weak interaction SciTech Connect Schramm, D.N. 1989-12-01 The weak interaction plays a critical role in modern Big Bang cosmology. This review will emphasize two of its most publicized cosmological connections: Big Bang nucleosynthesis and Dark Matter. The first of these is connected to the cosmological prediction of Neutrino Flavours, N{sub {nu}} {approximately} 3 which is now being confirmed at SLC and LEP. The second is interrelated to the whole problem of galaxy and structure formation in the universe.
This review will demonstrate the role of the weak interaction both for dark matter candidates and for the problem of generating seeds to form structure. 87 refs., 3 figs., 5 tabs. 12. Growth and decay of weak shock waves in magnetogasdynamics NASA Astrophysics Data System (ADS) Singh, L. P.; Singh, D. B.; Ram, S. D. 2015-12-01 The purpose of the present study is to investigate the problem of the propagation of weak shock waves in an inviscid, electrically conducting fluid under the influence of a magnetic field. The analysis assumes the following two cases: (1) a planar flow with a uniform transverse magnetic field and (2) cylindrically symmetric flow with a uniform axial or varying azimuthal magnetic field. A system of two coupled nonlinear transport equations, governing the strength of a shock wave and the first-order discontinuity induced behind it, is derived that admits a solution that agrees with the classical decay laws for a weak shock. An analytic expression for the determination of the shock formation distance is obtained. How the magnetic field strength, whether axial or azimuthal, influences the shock formation is also assessed. 13. Observationally determined Fe II oscillator strengths NASA Astrophysics Data System (ADS) Shull, J. M.; van Steenberg, M.; Seab, C. G. 1983-08-01 Absorption oscillator strengths for 21 Fe II resonance lines have been determined using a curve-of-growth analysis of interstellar data from the Copernicus and International Ultraviolet Explorer (IUE) satellites. In addition to slight changes in strengths of the far-UV lines, new f-values are reported for wavelength 1608.45, a prominent line in interstellar and quasar absorption spectra, and for wavelength 2260.08, a weak, newly identified line in IUE interstellar spectra. An upper limit on the strength of the undetected line at 2366.867 Å (UV multiplet 2) is set. Using revised oscillator strengths, Fe II column densities toward 13 OB stars are derived.
The interstellar depletions, (Fe/H), relative to solar values range between factors of 10 and 120. 14. Monopole Strength Function of Deformed Superfluid Nuclei SciTech Connect Stoitsov, M. V.; Kortelainen, E. M.; Nakatsukasa, T.; Losa, C.; Nazarewicz, Witold 2011-01-01 We present an efficient method for calculating strength functions using the finite amplitude method (FAM) for deformed superfluid heavy nuclei within the framework of the nuclear density functional theory. We demonstrate that FAM reproduces strength functions obtained with the fully self-consistent quasi-particle random-phase approximation (QRPA) at a fraction of computational cost. As a demonstration, we compute the isoscalar and isovector monopole strength for strongly deformed configurations in ^{240}Pu by considering huge quasi-particle QRPA spaces. Our approach to FAM, based on Broyden's iterative procedure, opens the possibility for large-scale calculations of strength distributions in well-bound and weakly bound nuclei across the nuclear landscape. 15. Flexibility and Muscular Strength. ERIC Educational Resources Information Center Liemohn, Wendell 1988-01-01 This definition of flexibility and muscular strength also explores their roles in overall physical fitness and focuses on how increased flexibility and muscular strength can help decrease or eliminate lower back pain. (CB) 16. Muscle Weakness Thresholds for Prediction of Diabetes in Adults PubMed Central Peterson, Mark D.; Zhang, Peng; Choksi, Palak; Markides, Kyriakos S.; Al Snih, Soham 2016-01-01 Background Despite the known links between weakness and early mortality, what remains to be fully understood is the extent to which strength preservation is associated with protection from cardiometabolic diseases such as diabetes. Purpose The purposes of this study were to determine the association between muscle strength and diabetes among adults, and to identify age- and sex-specific thresholds of low strength for detection of risk. 
Methods A population-representative sample of 4,066 individuals, aged 20–85 years, was included from the combined 2011–2012 National Health and Nutrition Examination Survey datasets. Strength was assessed using a hand-held dynamometer, and the single largest reading from either hand was normalized to body mass. A logistic regression model was used to assess the association between normalized grip strength and risk of diabetes, as determined by hemoglobin A1c (HbA1c) levels (≥6.5% [≥48 mmol/mol]), while controlling for sociodemographic characteristics, anthropometric measures, and television viewing time. Results For every 0.05 decrement in normalized strength, there was a 1.26 times increased adjusted odds for diabetes in men and women. Women were at lower odds of having diabetes (OR: 0.49; 95% CI: 0.29–0.82), whereas age, waist circumference and lower income were inversely associated. Optimal sex- and age-specific weakness thresholds to detect diabetes were 0.56, 0.50, and 0.45 for men, and 0.42, 0.38, and 0.33 for women, for ages 20–39 years, 40–59 years, and 60–80 years. Conclusions and Clinical Relevance We present thresholds of strength that can be incorporated into a clinical setting for identifying adults that are at risk for developing diabetes, and that might benefit from lifestyle interventions to reduce risk. PMID:26744337 17. Cosmology with weak lensing surveys. PubMed Munshi, Dipak; Valageas, Patrick 2005-12-15 Weak gravitational lensing is responsible for the shearing and magnification of the images of high-redshift sources due to the presence of intervening mass. Since the lensing effects arise from deflections of the light rays due to fluctuations of the gravitational potential, they can be directly related to the underlying density field of the large-scale structures. 
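The sex- and age-specific weakness thresholds reported in the diabetes abstract above (normalized strength = largest grip reading divided by body mass) lend themselves to a simple screening check. A minimal sketch, assuming illustrative function and dictionary names that are not from the study itself:

```python
# Grip-strength weakness screen based on the cutoffs reported above
# (normalized strength = max grip force in kg / body mass in kg).
# Helper name and data layout are illustrative assumptions.

THRESHOLDS = {
    # sex: [(upper age of band, cutoff), ...] for ages 20-39, 40-59, 60-80
    "men":   [(39, 0.56), (59, 0.50), (80, 0.45)],
    "women": [(39, 0.42), (59, 0.38), (80, 0.33)],
}

def below_weakness_threshold(sex, age, grip_kg, body_mass_kg):
    """Return True if normalized grip strength falls below the age/sex cutoff."""
    normalized = grip_kg / body_mass_kg
    for upper_age, cutoff in THRESHOLDS[sex]:
        if age <= upper_age:
            return normalized < cutoff
    raise ValueError("age outside the 20-80 year range covered by the study")

# Example: a 45-year-old man with a 35 kg grip at 80 kg body mass has
# normalized strength 0.4375, below the 0.50 cutoff for ages 40-59.
```

The banded lookup mirrors how the abstract tabulates its cutoffs; a clinical implementation would also need the hemoglobin A1c context the study used to define diabetes.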
Weak gravitational surveys are complementary to both galaxy surveys and cosmic microwave background observations as they probe unbiased nonlinear matter power spectra at medium redshift. Ongoing CMBR experiments such as WMAP and a future Planck satellite mission will measure the standard cosmological parameters with unprecedented accuracy. The focus of attention will then shift to understanding the nature of dark matter and vacuum energy: several recent studies suggest that lensing is the best method for constraining the dark energy equation of state. During the next 5 year period, ongoing and future weak lensing surveys such as the Joint Dark Energy Mission (JDEM; e.g. SNAP) or the Large-aperture Synoptic Survey Telescope will play a major role in advancing our understanding of the universe in this direction. In this review article, we describe various aspects of probing the matter power spectrum and the bi-spectrum and other related statistics with weak lensing surveys. This can be used to probe the background dynamics of the universe as well as the nature of dark matter and dark energy. PMID:16286284 18. Weak localization of seismic waves. PubMed Larose, E; Margerin, L; Van Tiggelen, B A; Campillo, M 2004-07-23 We report the observation of weak localization of seismic waves in a natural environment. It emerges as a doubling of the seismic energy around the source within a spot of the width of a wavelength, which is several tens of meters in our case. The characteristic time for its onset is the scattering mean-free time that quantifies the internal heterogeneity. 20. N-{Delta} weak transition SciTech Connect Graczyk, Krzysztof M. 2011-11-23 A short review of the Rein-Sehgal and isobar models is presented. The attention is focused on the nucleon-{Delta}(1232) weak transition form-factors. The results of the recent re-analyses of the ANL and BNL bubble chamber neutrino-deuteron scattering data are discussed. 1.
Polymer-solid interfaces: Structure and strength NASA Astrophysics Data System (ADS) Gong, Liezhong 1999-12-01 This thesis explored the influence of sticker group concentration (φ), interaction parameter between sticker groups and the solid substrate (chi), and bonding time on the structure and strength of polymer-solid interfaces. Carboxylated polybutadienes (cPBD's) with different COOH concentrations (φ) were synthesized through hydrocarboxylation of high molecular weight polybutadienes. The COOH groups were randomly distributed along the polymer chains and selectively introduced to the pendant double bonds. The influence of φ and chi on the peel strength (GIC) of two interfaces, cPBD-Al and cPBD-AlS (amine terminated silane modified Al), was investigated using the T-peel test. Counter-intuitively, with increasing φ, GIC of both interfaces increased first and then decreased after passing through a maximum strength. At a constant φ, GIC increased with chi. Additionally, the bonding dynamics was strongly dependent on φ and chi and the time scale was several orders of magnitude longer than the characteristic relaxation time of polybutadiene. The adhesion dynamics was controlled by the slow frustrated surface reorganization process. Most of the experimental observations could be attributed to the variation in chain physical connectivity within the interfaces, which was explored by a self-consistent field lattice model (SCFLM). Sticker groups were found to segregate to the solid surface and chain physical connectivity was modified at the polymer-solid interface. We demonstrated that the segregation of sticker groups and the variation in chain physical connectivity led to the formation of a weak boundary interphase. It is this weak interphase that controls the strength and bonding dynamics of the interface. The presence of this weak interphase was experimentally verified through angle resolved XPS analysis.
Finally, an entanglement percolation model was developed to correlate the interface strength with the chain connectivity. The results obtained in this study contribute 2. Critical Transport in Weakly Disordered Semiconductors and Semimetals NASA Astrophysics Data System (ADS) Syzranov, S. V.; Radzihovsky, L.; Gurarie, V. 2015-04-01 Motivated by Weyl semimetals and weakly doped semiconductors, we study transport in a weakly disordered semiconductor with a power-law quasiparticle dispersion ξ_k ∝ k^α. We show that in 2α dimensions short-correlated disorder experiences logarithmic renormalization from all energies in the band. We study the case of a general dimension d using a renormalization group, controlled by an ε = 2α - d expansion. Above the critical dimensions, conduction exhibits a localization-delocalization phase transition or a sharp crossover (depending on the symmetries of the Hamiltonian) as a function of disorder strength. We utilize this analysis to compute the low-temperature conductivity in Weyl semimetals and weakly doped semiconductors near and below the critical disorder point. PMID:25955065 4. Advantages of nonclassical pointer states in postselected weak measurements NASA Astrophysics Data System (ADS) Turek, Yusuf; Maimaiti, W.; Shikano, Yutaka; Sun, Chang-Pu; Al-Amri, M. 2015-08-01 We investigate, within the weak measurement theory, the advantages of nonclassical pointer states over semiclassical ones for coherent, squeezed vacuum, and Schrödinger cat states. These states are utilized as pointer states for the system operator Â with property Â² = Î, where Î represents the identity operator. We calculate the ratio between the signal-to-noise ratio of nonpostselected and postselected weak measurements. The latter is used to find the quantum Fisher information for the above pointer states. The average shifts for those pointer states with arbitrary interaction strength are investigated in detail. One key result is that we find the postselected weak measurement scheme for nonclassical pointer states to be superior to semiclassical ones. This can improve the precision of the measurement process. 5. What Is a Strength? ERIC Educational Resources Information Center Wolin, Sybil 2003-01-01 As the strength-based perspective gains recognition, it is important to describe what constitutes strengths and to develop a specific vocabulary to name them. This article draws on resilience research to help identify specific competencies and areas of strengths in youth. (Contains 1 table.) 6. Strength Training for Girls. ERIC Educational Resources Information Center Connaughton, Daniel; Connaughton, Angela; Poor, Linda 2001-01-01 Strength training can be fun, safe, and appropriate for young girls and women and is an important component of any fitness program when combined with appropriate cardiovascular and flexibility activities.
Concerns and misconceptions regarding girls' strength training are discussed, presenting general principles of strength training for children… 7. Competing weak localization and weak antilocalization in ultrathin topological insulators. PubMed Lang, Murong; He, Liang; Kou, Xufeng; Upadhyaya, Pramey; Fan, Yabin; Chu, Hao; Jiang, Ying; Bardarson, Jens H; Jiang, Wanjun; Choi, Eun Sang; Wang, Yong; Yeh, Nai-Chang; Moore, Joel; Wang, Kang L 2013-01-01 We demonstrate evidence of a surface gap opening in topological insulator (TI) thin films of (Bi(0.57)Sb(0.43))(2)Te(3) below six quintuple layers through transport and scanning tunneling spectroscopy measurements. By effectively tuning the Fermi level via gate-voltage control, we unveil a striking competition between weak localization and weak antilocalization (WAL) at low magnetic fields in nonmagnetic ultrathin films, possibly owing to the change of the net Berry phase. Furthermore, when the Fermi level is swept into the surface gap of ultrathin samples, the overall unitary behaviors are revealed at higher magnetic fields, which are in contrast to the pure WAL signals obtained in thicker films. Our findings show an exotic phenomenon characterizing the gapped TI surface states and point to the future realization of the quantum spin Hall effect and dissipationless TI-based applications. 8. Strength Modeling Report NASA Technical Reports Server (NTRS) Badler, N. I.; Lee, P.; Wong, S. 1985-01-01 Strength modeling is a complex and multi-dimensional issue.
There are numerous parameters to the problem of characterizing human strength, most notably: (1) position and orientation of body joints; (2) isometric versus dynamic strength; (3) effector force versus joint torque; (4) instantaneous versus steady force; (5) active force versus reactive force; (6) presence or absence of gravity; (7) body somatotype and composition; (8) body (segment) masses; (9) muscle group involvement; (10) muscle size; (11) fatigue; and (12) practice (training) or familiarity. In surveying the available literature on strength measurement and modeling an attempt was made to examine as many of these parameters as possible. The conclusions reached at this point support the feasibility of implementing computationally reasonable human strength models. The assessment of accuracy of any model against a specific individual, however, will probably not be possible on any realistic scale. Taken statistically, strength modeling may be an effective tool for general questions of task feasibility and strength requirements. 9. Weak values and weak coupling maximizing the output of weak measurements SciTech Connect Di Lorenzo, Antonio 2014-06-15 In a weak measurement, the average output 〈o〉 of a probe that measures an observable A{sup -hat} of a quantum system undergoing both a preparation in a state ρ{sub i} and a postselection in a state E{sub f} is, to a good approximation, a function of the weak value A{sub w}=Tr[E{sub f}A{sup -hat} ρ{sub i}]/Tr[E{sub f}ρ{sub i}], a complex number. For a fixed coupling λ, when the overlap Tr[E{sub f}ρ{sub i}] is very small, A{sub w} diverges, but 〈o〉 stays finite, often tending to zero for symmetry reasons. This paper answers the questions: what is the weak value that maximizes the output for a fixed coupling? What is the coupling that maximizes the output for a fixed weak value? We derive equations for the optimal values of A{sub w} and λ, and provide the solutions.
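The weak value A_w = Tr[E_f Â ρ_i]/Tr[E_f ρ_i] quoted in the abstract above is straightforward to evaluate numerically. A minimal NumPy sketch for a single qubit (the chosen states and observable are illustrative assumptions, not the paper's setup); it shows |A_w| far exceeding the eigenvalue range of Â when the overlap Tr[E_f ρ_i] is small:

```python
import numpy as np

# Weak value A_w = Tr[E_f A rho_i] / Tr[E_f rho_i]; the qubit example
# below is an illustrative assumption, not the specific model of the paper.

def weak_value(A, rho_i, E_f):
    """Complex weak value of observable A for preselection rho_i, postselection E_f."""
    return np.trace(E_f @ A @ rho_i) / np.trace(E_f @ rho_i)

A = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli-z, eigenvalues +/-1

eps = 0.05  # near-orthogonality parameter; overlap Tr[E_f rho_i] = sin(eps)**2
pre = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)], dtype=complex)
post = np.array([np.cos(np.pi / 4 + eps), -np.sin(np.pi / 4 + eps)], dtype=complex)

rho_i = np.outer(pre, pre.conj())   # preselected state |+><+|
E_f = np.outer(post, post.conj())   # postselection projector

aw = weak_value(A, rho_i, E_f)      # -cot(eps): magnitude ~20, well outside [-1, 1]
```

Shrinking eps makes the divergence of A_w explicit, while the probe output 〈o〉 discussed in the abstract stays finite.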
The results are independent of the dimensionality of the system, and they apply to a probe having a Hilbert space of arbitrary dimension. Using the Schrödinger–Robertson uncertainty relation, we demonstrate that, in an important case, the amplification 〈o〉 cannot exceed the initial uncertainty σ{sub o} in the observable o{sup -hat}; we provide an upper limit for the more general case, and a strategy to obtain 〈o〉≫σ{sub o}. - Highlights: •We have provided a general framework to find the extremal values of a weak measurement. •We have derived the location of the extremal values in terms of preparation and postselection. •We have devised a maximization strategy going beyond the limit of the Schrödinger–Robertson relation. 10. Peroneal muscle weakness in female basketballers following chronic ankle sprain. PubMed Rottigni, S A; Hopper, D 1991-01-01 Female A-grade basketballers were examined for invertor and evertor muscle strength. Two test groups participated. The injured group were players who had persisting disability following ankle sprains. The control group were players who had never sustained an ankle sprain. Test apparatus was the Orthotron isokinetic dynamometer at contraction speed of 180° per second. Trends towards higher invertor and evertor strength in the uninjured group compared with the injured group, found in the present study, have been supported by one other report. Invertors were found to be significantly stronger than evertors in both injured and uninjured groups, with the exception of the dominant leg of the uninjured group. A significant weakness in non-dominant evertors of the uninjured group was detected. Dominance did not significantly alter strength differences in the invertor or evertor muscle groups within the uninjured population. The clinical importance of strengthening the peroneal muscles in ankle sprain rehabilitation is discussed, and further research considerations provided. 11.
Weak Coupling in 143Nd NASA Astrophysics Data System (ADS) Zhou, Xiao-Hong; E, Ideguchi; T, Kishida; M, Ishihara; H, Tsuchida; Y, Gono; T, Morikawa; M, Shibata; H, Watanabe; M, Miyake; T, Tsutsumi; S, Motomura; S, Mitarai 2000-04-01 The high-spin states of 143Nd have been studied in the 130Te(18O, 5n)143Nd reaction at a beam energy of 80 MeV using techniques of in-beam γ-ray spectroscopy. Measurements of γ - γ - t coincidences, γ-ray angular distributions, and γ-ray linear polarizations were performed. A level scheme of 143Nd with spin and parity assignments up to 53/2+ is proposed. While a weak coupling model can explain the level structure up to the Jπ=39/2- state, this model can not reproduce the higher-lying states. Additionally, a new low-lying non-yrast level sequence in 143Nd was observed in the present work, which can be well described by the weak coupling of an i13/2 neutron to the 142Nd core nucleus. 12. Overdamping by weakly coupled environments SciTech Connect Esposito, Massimiliano; Haake, Fritz 2005-12-15 A quantum system weakly interacting with a fast environment usually undergoes a relaxation with complex frequencies whose imaginary parts are damping rates quadratic in the coupling to the environment in accord with Fermi's 'golden rule'. We show for various models (spin damped by harmonic-oscillator or random-matrix baths, quantum diffusion, and quantum Brownian motion) that upon increasing the coupling up to a critical value still small enough to allow for weak-coupling Markovian master equations, a different relaxation regime can occur. In that regime, complex frequencies lose their real parts such that the process becomes overdamped. Our results call into question the standard belief that overdamping is exclusively a strong coupling feature. 13. Optimizing SNAP for Weak Lensing NASA Astrophysics Data System (ADS) High, F. W.; Ellis, R. S.; Massey, R. J.; Rhodes, J. D.; Lamoureux, J. 
I.; SNAP Collaboration 2004-12-01 The Supernova/Acceleration Probe (SNAP) satellite proposes to measure weak gravitational lensing in addition to type Ia supernovae. Its pixel scale has been set to 0.10 arcsec per pixel as established by the needs of supernova observations. To find the optimal pixel scale for accurate weak lensing measurements we conduct a tradeoff study in which, via simulations, we fix the survey size in total pixels and vary the pixel scale. Our preliminary results show that with a smaller scale of about 0.08 arcsec per pixel we can minimize the contribution of intrinsic shear variance to the error on the power spectrum of mass density distortion. Currently we are testing the robustness of this figure as well as determining whether dithering yields analogous results. 14. Importance and challenges of measuring intrinsic foot muscle strength PubMed Central 2012-01-01 Background Intrinsic foot muscle weakness has been implicated in a range of foot deformities and disorders. However, to establish a relationship between intrinsic muscle weakness and foot pathology, an objective measure of intrinsic muscle strength is needed. The aim of this review was to provide an overview of the anatomy and role of intrinsic foot muscles and the implications of intrinsic weakness, and to evaluate the different methods used to measure intrinsic foot muscle strength. Method Literature was sourced from database searches of MEDLINE, PubMed, SCOPUS, Cochrane Library, PEDro and CINAHL up to June 2012. Results There is no widely accepted method of measuring intrinsic foot muscle strength. Methods to estimate toe flexor muscle strength include the paper grip test, plantar pressure, toe dynamometry, and the intrinsic positive test. Hand-held dynamometry has excellent interrater and intrarater reliability and limits toe curling, which is an action hypothesised to activate extrinsic toe flexor muscles.
However, it is unclear whether any method can actually isolate intrinsic muscle strength. Also, most methods measure only toe flexor strength; other actions such as toe extension and abduction have not been adequately assessed. Indirect methods to investigate intrinsic muscle structure and performance include CT, ultrasonography, MRI, EMG, and muscle biopsy. Indirect methods often discriminate between intrinsic and extrinsic muscles, but lack the ability to measure muscle force. Conclusions There are many challenges to accurately measure intrinsic muscle strength in isolation. Most studies have measured toe flexor strength as a surrogate measure of intrinsic muscle strength. Hand-held dynamometry appears to be a promising method of estimating intrinsic muscle strength. However, the contribution of extrinsic muscles cannot be excluded from toe flexor strength measurement. Future research should clarify the relative contribution of intrinsic and extrinsic muscles 15. Prevalence of reduced muscle strength in older U.S. adults: United States, 2011-2012. PubMed Looker, Anne C; Wang, Chia-Yih 2015-01-01 Five percent of adults aged 60 and over had weak muscle strength and 13% had intermediate muscle strength, as defined by the new FNIH criteria. Weak muscle strength is clinically relevant because it is associated with slow gait speed, an important mobility impairment. It is also linked to an increased risk of death. The prevalence of reduced muscle strength increased with age and was higher in non-Hispanic Asian and Hispanic persons than in non-Hispanic white or non-Hispanic black persons. Decreasing muscle strength was linked with increased difficulty in rising from an armless chair, which is another important type of mobility impairment. PMID:25633238 16. The Strength of the Metal - Aluminum Oxide Interface NASA Technical Reports Server (NTRS) Pepper, S. V.
1984-01-01 The strength of the interface between metals and aluminum oxide is an important factor in the successful operation of devices found throughout modern technology. One finds the interface in machine tools, jet engines, and microelectronic integrated circuits. The interface, however, should be strong or weak depending on the application. The diverse technological demands have led to some general ideas concerning the origin of the interfacial strength, and have stimulated fundamental research on the problem. The present status of our understanding of the source of the strength of the metal - aluminum oxide interface in terms of interatomic bonds is reviewed. Some future directions for research are suggested. 17. Studies of fiber-matrix adhesion on compression strength NASA Technical Reports Server (NTRS) Bascom, Willard D.; Nairn, John A.; Boll, D. J. 1991-01-01 A study was initiated on the effects of the matrix polymer and the fiber-matrix bond strength on the compression strength of carbon fiber polymer matrix composites. The work includes tests with micro-composites, single ply composites, laminates, and multi-axial loaded cylinders. The results obtained thus far indicate that weak fiber-matrix adhesion dramatically reduces 0 degree compression strength. Evidence is also presented that the flaws in the carbon fiber that govern compression strength differ from those that determine fiber tensile strength. Examination of post-failure damage in the single ply tests indicates kink banding at the crack tip. 18. Alumina fiber strength improvement NASA Technical Reports Server (NTRS) Pepper, R. T.; Nelson, D. C. 1982-01-01 The effective fiber strength of alumina fibers in an aluminum composite was increased to 173,000 psi. A high temperature heat treatment, combined with a glassy carbon surface coating, was used to prevent degradation and improve fiber tensile strength.
Attempts to achieve chemical strengthening of the alumina fiber by chromium oxide and boron oxide coatings proved unsuccessful. A major problem encountered on the program was the low and inconsistent strength of the Dupont Fiber FP used for the investigation. 19. Detecting weakly interacting massive particles. NASA Astrophysics Data System (ADS) Drukier, A. K.; Gelmini, G. B. The growing synergy between astrophysics, particle physics, and low background experiments strengthens the possibility of detecting astrophysical non-baryonic matter. The idea of direct detection is that an incident, massive weakly interacting particle could collide with a nucleus and transfer an energy that could be measured. The present low levels of background achieved by the PNL/USC Ge detector represent a new technology which yields interesting bounds on Galactic cold dark matter and on light bosons emitted from the Sun. Further improvements require the development of cryogenic detectors. The authors analyse the practicality of such detectors, their optimization, and background suppression using the "annual modulation effect". 20. Weak lensing by galaxy troughs NASA Astrophysics Data System (ADS) Gruen, Daniel 2016-06-01 Galaxy troughs, i.e. underdensities in the projected galaxy field, are a weak lensing probe of the low density Universe with high signal-to-noise ratio. I present measurements of the radial distortion of background galaxy images and the de-magnification of the CMB by troughs constructed from Dark Energy Survey and Sloan Digital Sky Survey galaxy catalogs. With high statistical significance and a relatively robust modeling, these probe gravity in regimes of density and scale difficult to access for conventional statistics. 1. Weak quasielastic production of hyperons SciTech Connect Singh, S. K.; Vacas, M. J.
Vicente 2006-09-01 The quasielastic weak production of Λ and Σ hyperons from nucleons and nuclei induced by antineutrinos is studied in the energy region of some ongoing neutrino oscillation experiments in the intermediate energy region. The hyperon-nucleon transition form factors determined from neutrino-nucleon scattering and an analysis of high precision data on semileptonic decays of neutron and hyperons using SU(3) symmetry have been used. The nuclear effects due to Fermi motion and final state interaction effects due to hyperon-nucleon scattering have also been studied. The numerical results for differential and total cross sections have been presented. 2. Tagged-weak π method SciTech Connect Margaryan, A.; Hashimoto, O.; Kakoyan, V.; Knyazyan, S.; Tang, L. 2011-02-15 A new 'tagged-weak π method' is proposed for determination of electromagnetic transition probabilities B(E2) and B(M1) of the hypernuclear states with lifetimes of ~10^-10 s. With this method, we are planning to measure B(E2) and B(M1) for light hypernuclei at JLab. The results of Monte Carlo simulations for the case of E2(5/2+, 3/2+ → 1/2+) transitions in 7ΛHe hypernuclei are presented. 3. Strength Training and Your Child MedlinePlus ... help prevent injuries and speed up recovery. Strength training is the practice of using free ... 4. The weak scale from BBN NASA Astrophysics Data System (ADS) Hall, Lawrence J.; Pinner, David; Ruderman, Joshua T. 2014-12-01 The measured values of the weak scale, v, and the first generation masses, m_u, m_d, m_e, are simultaneously explained in the multiverse, with all these parameters scanning independently. At the same time, several remarkable coincidences are understood.
Small variations in these parameters away from their measured values lead to the instability of hydrogen, the instability of heavy nuclei, and either a hydrogen or a helium dominated universe from Big Bang Nucleosynthesis. In the 4d parameter space of (m_u, m_d, m_e, v), catastrophic boundaries are reached by separately increasing each parameter above its measured value by a factor of (1.4, 1.3, 2.5, ~5), respectively. The fine-tuning problem of the weak scale in the Standard Model is solved: as v is increased beyond the observed value, it is impossible to maintain a significant cosmological hydrogen abundance for any values of m_u, m_d, m_e that yield both hydrogen and heavy nuclei stability. 5. Weak antilocalisation in topological insulators NASA Astrophysics Data System (ADS) Bi, Xintao; Hankiewicz, Ewelina; Culcer, Dimitrie 2014-03-01 Topological insulators (TI) have changed our understanding of insulating behaviour. They are insulators in the bulk but conducting along their surfaces due to spin-orbit interaction. Much of the recent research focuses on overcoming the transport bottleneck, the fact that surface state transport is overwhelmed by bulk transport stemming from unintentional doping. The key to overcoming this bottleneck is identifying unambiguous signatures of surface state transport. This talk will discuss one such signature, which is manifest in the coherent backscattering of electrons. Due to strong spin-orbit coupling in TI one expects to observe weak antilocalisation rather than weak localisation, meaning that coherent backscattering increases the electrical conductivity. The features of this effect, however, are rather subtle, because in TI the impurities have strong spin-orbit coupling as well.
I will show that spin-orbit coupled impurities introduce an additional time scale, which is expected to be shorter than the dephasing time, and the resulting conductivity has a logarithmic dependence on the carrier density, a behaviour hitherto unknown in 2D electron systems. The result we predict is observable experimentally and would provide a smoking gun test of surface transport. 6. Weak lensing and cosmological investigation NASA Astrophysics Data System (ADS) Acquaviva, Viviana 2005-03-01 In the last few years the scientific community has been dealing with the challenging issue of identifying the dark energy component. We regard weak gravitational lensing as a brand new, and extremely important, tool for cosmological investigation in this field. In fact, the features imprinted on the Cosmic Microwave Background radiation by the lensing from the intervening distribution of matter represent a largely unbiased estimator, and can thus be used for putting constraints on different dark energy models. This is true in particular for the magnetic-type B-modes of CMB polarization, whose unlensed spectrum at large multipoles (ℓ ≈ 1000) is very small even in presence of an amount of gravitational waves as large as currently allowed by the experiments: therefore, on these scales the lensing phenomenon is solely responsible for the observed power, and this signal turns out to be a faithful tracer of the dark energy dynamics. We first recall the formal apparatus of weak lensing in extended theories of gravity, introducing the physical observables suitable to bridge lensing and cosmology, and then evaluate the amplitude of the expected effect in the particular case of a Non-Minimally-Coupled model, featuring a quadratic coupling between quintessence and the Ricci scalar. 7. Building on Our Strengths.
ERIC Educational Resources Information Center Hill, Robert 1978-01-01 Comments on the feeling that the American family is disintegrating, and that many criticisms traditionally made about Black families are now made about White families. Suggests that people need to stress family strengths. As an example, five major strengths of Black families are described: flexibility, work and achievement ethics, religiosity, and… 8. Strengths of Remarried Families. ERIC Educational Resources Information Center Knaub, Patricia Kain; And Others 1984-01-01 Focuses on remarried families' (N=80) perceptions of family strengths, marital satisfaction, and adjustment to the remarried situation. Results indicated that although most would like to make some changes, scores on the measurements used were high. A supportive environment was the most important predictor of family strength and success. (JAC) 9. Tips from the toolkit: 2--assessing organisational strengths. PubMed Steer, Neville 2010-03-01 'SWOT' is a familiar term used in the development of business strategy. It is based on the identification of strengths, weaknesses, opportunities and threats as part of a strategic analysis approach. While there are a range of more sophisticated models for analysing and developing business strategy, it is a useful model for general practice as it is less time consuming than other approaches. The following article discusses some ways to apply this framework to assess organisational strengths (and weaknesses). It is based on The Royal Australian College of General Practitioners' "General practice management toolkit". 10. Strengths of serpentinite gouges at elevated temperatures USGS Publications Warehouse Moore, Diane E.; Lockner, D.A.; Ma, S.; Summers, R.; Byerlee, J.D. 1997-01-01 Serpentinite has been proposed as a cause of both low strength and aseismic creep of fault zones. 
To test these hypotheses, we have measured the strength of chrysotile-, lizardite-, and antigorite-rich serpentinite gouges under hydrothermal conditions, with emphasis on chrysotile, which has thus far received little attention. At 25 °C, the coefficient of friction, μ, of chrysotile gouge is roughly 0.2, whereas the lizardite- and antigorite-rich gouges are at least twice as strong. The very low room temperature strength of chrysotile is a consequence of its unusually high adsorbed water content. When the adsorbed water is removed, chrysotile is as strong as pure antigorite gouge at room temperature. Heating to ~200 °C causes the frictional strengths of all three gouges to increase. Limited data suggest that different polytypes of a given serpentine mineral have similar strengths; thus deformation-induced changes in polytype should not affect fault strength. At 25 °C, the chrysotile gouge has a transition from velocity strengthening at low velocities to velocity weakening at high velocities, consistent with previous studies. At temperatures up to ~200 °C, however, chrysotile strength is essentially independent of velocity at low velocities. Overall, chrysotile has a restricted range of velocity-strengthening behavior that migrates to higher velocities with increasing temperature. Less information on velocity dependence is available for the lizardite and antigorite gouges, but their behavior is consistent with that outlined for chrysotile. The marked changes in velocity dependence and strength of chrysotile with heating underscore the hazards of using room temperature data to predict fault behavior at depth. The velocity behavior at elevated temperatures does not rule out serpentinite as a cause of aseismic slip, but in the presence of a hydrostatic fluid pressure gradient, all varieties of serpentine are too strong to explain the apparent weakness of faults such 11.
Revealing geometric phases in modular and weak values with a quantum eraser NASA Astrophysics Data System (ADS) Cormann, Mirko; Remy, Mathilde; Kolaric, Branko; Caudano, Yves 2016-04-01 We present a procedure to completely determine the complex modular values of arbitrary observables of pre- and postselected ensembles, which works experimentally for all measurement strengths and all postselected states. This procedure allows us to discuss the physics of modular and weak values in interferometric experiments involving a qubit meter. We determine both the modulus and the argument of the modular value for any measurement strength in a single step, by simultaneously controlling the visibility and the phase in a quantum eraser interference experiment. Modular and weak values are closely related. Using entangled qubits for the probed and meter systems, we show that the phase of the modular and weak values has a topological origin. This phase is completely defined by the intrinsic physical properties of the probed system and its time evolution. The physical significance of this phase can thus be used to evaluate the quantumness of weak values. 12. Dynamic strength of molecular adhesion bonds. PubMed Central Evans, E; Ritchie, K 1997-01-01 In biology, molecular linkages at, within, and beneath cell interfaces arise mainly from weak noncovalent interactions. These bonds will fail under any level of pulling force if held for sufficient time. Thus, when tested with ultrasensitive force probes, we expect cohesive material strength and strength of adhesion at interfaces to be time- and loading rate-dependent properties. To examine what can be learned from measurements of bond strength, we have extended Kramers' theory for reaction kinetics in liquids to bond dissociation under force and tested the predictions by smart Monte Carlo (Brownian dynamics) simulations of bond rupture. 
By definition, bond strength is the force that produces the most frequent failure in repeated tests of breakage, i.e., the peak in the distribution of rupture forces. As verified by the simulations, theory shows that bond strength progresses through three dynamic regimes of loading rate. First, bond strength emerges at a critical rate of loading (≥ 0) at which spontaneous dissociation is just frequent enough to keep the distribution peak at zero force. In the slow-loading regime immediately above the critical rate, strength grows as a weak power of loading rate and reflects initial coupling of force to the bonding potential. At higher rates, there is crossover to a fast regime in which strength continues to increase as the logarithm of the loading rate over many decades independent of the type of attraction. Finally, at ultrafast loading rates approaching the domain of molecular dynamics simulations, the bonding potential is quickly overwhelmed by the rapidly increasing force, so that only naked frictional drag on the structure remains to retard separation. Hence, to expose the energy landscape that governs bond strength, molecular adhesion forces must be examined over an enormous span of time scales. However, a significant gap exists between the time domain of force measurements in the laboratory and the extremely fast scale 13. Lizard locomotion on weak sand NASA Astrophysics Data System (ADS) Goldman, Daniel 2005-03-01 Terrestrial animal locomotion in the natural world can involve complex foot-ground interaction; for example, running on sand probes the solid and fluid behaviors of the medium. We study locomotion of the desert-dwelling lizard Callisaurus draconoides (length 16 cm, mass 20 g) during rapid running on sand.
To explore the role of foot-ground interaction on locomotion, we study the impact of flat disks (2 cm diameter, 10 g) into a deep (800 particle diameters) bed of 250 μm glass spheres at fixed volume fraction φ = 0.59, and use a vertical flow of air (a fluidized bed) to change the material properties of the medium. A constant flow Q below the onset of bed fluidization weakens the solid: at fixed φ the penetration depth and time of a disk increase with increasing Q. We measure the average speed, foot impact depth, and foot contact time as a function of material strength. The animal maintains constant penetration time (30 msec) and high speed (1.4 m/sec) even when foot penetration depth varies as we manipulate material strength. The animals compensate for decreasing propulsion by increasing stride frequency. 14. 7 CFR 51.894 - Weak. Code of Federal Regulations, 2010 CFR 2010-01-01 ... the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... Standards for Grades of Table Grapes (European or Vinifera Type) 1 Definitions § 51.894 Weak. Weak means... content, inferior flavor, or are of poor keeping quality.... 15. 7 CFR 51.894 - Weak. Code of Federal Regulations, 2011 CFR 2011-01-01 ... Standards for Grades of Table Grapes (European or Vinifera Type) 1 Definitions § 51.894 Weak. Weak means that individual berries are somewhat translucent, watery and soft, may have relatively low... 16. 7 CFR 51.894 - Weak. Code of Federal Regulations, 2012 CFR 2012-01-01 ... Standards for Grades of Table Grapes (European or Vinifera Type) 1 Definitions § 51.894 Weak. Weak means that individual berries are somewhat translucent, watery and soft, may have relatively low... 17. Migrant networks and international migration: testing weak ties.
PubMed Liu, Mao-Mei 2013-08-01 This article examines the role of migrant social networks in international migration and extends prior research by testing the strength of tie theory, decomposing networks by sources and resources, and disentangling network effects from complementary explanations. Nearly all previous empirical research has ignored friendship ties and has largely neglected extended-family ties. Using longitudinal data from the Migration between Africa and Europe project collected in Africa (Senegal) and Europe (France, Italy, and Spain), this article tests the robustness of network theory, and in particular the role of weak ties, on first-time migration between Senegal and Europe. Discrete-time hazard model results confirm that weak ties are important and that network influences appear to be gendered, but they do not uphold the contention in previous literature that strong ties are more important than weak ties for male and female migration. Indeed, weak ties play an especially important role in male migration. In terms of network resources, having more resources as a result of strong ties appears to dampen overall migration, while having more resources as a result of weaker ties appears to stimulate male migration. Finally, the diversity of resources has varied effects for male and female migration. 18. Breaking through barriers: using technology to address executive function weaknesses and improve student achievement. PubMed Schwartz, David M 2014-01-01 Assistive technologies provide significant capabilities for improving student achievement. Improved accessibility, cost, and diversity of applications make integration of technology a powerful tool to compensate for executive function weaknesses and deficits and their impact on student performance, learning, and achievement. These tools can be used to compensate for decreased working memory, poor time management, poor planning and organization, poor initiation, and decreased memory.
Assistive technology provides mechanisms to assist students with diverse strengths and weaknesses in mastering core curricular concepts. PMID:25010083 19. Weak, Quiet Magnetic Fields Seen in the Venus Atmosphere PubMed Central Zhang, T. L.; Baumjohann, W.; Russell, C. T.; Luhmann, J. G.; Xiao, S. D. 2016-01-01 The existence of a strong internal magnetic field allows probing of the interior through both long term changes of and short period fluctuations in that magnetic field. Venus, while Earth's twin in many ways, lacks such a strong intrinsic magnetic field, but perhaps short period fluctuations can still be used to probe the electrical conductivity of the interior. Toward the end of the Venus Express mission, an aerobraking campaign took the spacecraft below the ionosphere into the very weakly electrically conducting atmosphere. As the spacecraft descended from 150 to 140 km altitude, the magnetic field became weaker on average and less noisy. Below 140 km, the median field strength became steady but the short period fluctuations continued to weaken. The weakness of the fluctuations indicates they might not be useful for electromagnetic sounding of the atmosphere from a high altitude platform such as a plane or balloon, but possibly could be attempted on a lander. PMID:27009234 4. Deterministic weak localization in periodic structures. PubMed Tian, C; Larkin, A 2005-12-01 In some perfect periodic structures classical motion exhibits deterministic diffusion. For such systems we present the weak localization theory. As a manifestation, for the velocity autocorrelation function a universal power law decay is predicted to appear at four Ehrenfest times. This deterministic weak localization is robust against weak quenched disorders, which may be confirmed by coherent backscattering measurements of periodic photonic crystals. 5. Crew Strength Training NASA Video Gallery Train to develop your upper and lower body strength in your muscles and bones by performing body-weight squats and push-ups. The Train Like an Astronaut project uses the excitement of exploration to... 6. Developing Strengths in Families ERIC Educational Resources Information Center Bowman, Ted 1976-01-01 There are few descriptions of growth experiences for total families. This paper describes one such model. It expresses the conviction that families need opportunities to come together with other families to identify strengths, sharpen communication skills, and establish goals. (Author) 7. Protecting weak measurements against systematic errors NASA Astrophysics Data System (ADS) Pang, Shengshi; Alonso, Jose Raul Gonzalez; Brun, Todd A.; Jordan, Andrew N. 2016-07-01 In this work, we consider the systematic error of quantum metrology by weak measurements under decoherence. We derive the systematic error of maximum likelihood estimation in general to the first-order approximation of a small deviation in the probability distribution and study the robustness of standard weak measurement and postselected weak measurements against systematic errors.
We show that, with a large weak value, the systematic error of a postselected weak measurement when the probe undergoes decoherence can be significantly lower than that of a standard weak measurement. This indicates another advantage of weak-value amplification in improving the performance of parameter estimation. We illustrate the results by an exact numerical simulation of decoherence arising from a bosonic mode and compare it to the first-order analytical result we obtain. 8. Fast Learning with Weak Synaptic Plasticity. PubMed Yger, Pierre; Stimberg, Marcel; Brette, Romain 2015-09-30 New sensory stimuli can be learned with a single or a few presentations. Similarly, the responses of cortical neurons to a stimulus have been shown to increase reliably after just a few repetitions. Long-term memory is thought to be mediated by synaptic plasticity, but in vitro experiments in cortical cells typically show very small changes in synaptic strength after a pair of presynaptic and postsynaptic spikes. Thus, it is traditionally thought that fast learning requires stronger synaptic changes, possibly because of neuromodulation. Here we show theoretically that weak synaptic plasticity can, in fact, support fast learning, because of the large number of synapses N onto a cortical neuron. In the fluctuation-driven regime characteristic of cortical neurons in vivo, the size of membrane potential fluctuations grows only as √N, whereas a single output spike leads to potentiation of a number of synapses proportional to N. Therefore, the relative effect of a single spike on synaptic potentiation grows as √N. This leverage effect requires precise spike timing. Thus, the large number of synapses onto cortical neurons allows fast learning with very small synaptic changes. Significance statement: Long-term memory is thought to rely on the strengthening of coactive synapses. 
This physiological mechanism is generally considered to be very gradual, and yet new sensory stimuli can be learned with just a few presentations. Here we show theoretically that this apparent paradox can be solved when there is a tight balance between excitatory and inhibitory input. In this case, small synaptic modifications applied to the many synapses onto a given neuron disrupt that balance and produce a large effect even for modifications induced by a single stimulus. This effect makes fast learning possible with small synaptic changes and reconciles physiological and behavioral observations. 9. A Universe without Weak Interactions SciTech Connect Harnik, Roni; Kribs, Graham D.; Perez, Gilad 2006-04-07 A universe without weak interactions is constructed that undergoes big-bang nucleosynthesis, matter domination, structure formation, and star formation. The stars in this universe are able to burn for billions of years, synthesize elements up to iron, and undergo supernova explosions, dispersing heavy elements into the interstellar medium. These definitive claims are supported by a detailed analysis where this hypothetical "Weakless Universe" is matched to our Universe by simultaneously adjusting Standard Model and cosmological parameters. For instance, chemistry and nuclear physics are essentially unchanged. The apparent habitability of the Weakless Universe suggests that the anthropic principle does not determine the scale of electroweak breaking, or even require that it be smaller than the Planck scale, so long as technically natural parameters may be suitably adjusted. Whether the multi-parameter adjustment is realized or probable is dependent on the ultraviolet completion, such as the string landscape. Considering a similar analysis for the cosmological constant, however, we argue that no adjustments of other parameters are able to allow the cosmological constant to rise even remotely close to the Planck scale while obtaining macroscopic structure.
The fine-tuning problems associated with the electroweak breaking scale and the cosmological constant therefore appear to be qualitatively different from the perspective of obtaining a habitable universe. 10. Weak Turbulence in Radiation Belts NASA Astrophysics Data System (ADS) Ganguli, Gurudas; Crabtree, Chris; Rudakov, Leonid 2015-11-01 Weak turbulence plays a significant role in space plasma dynamics. Induced nonlinear scattering dominates the evolution in the low-beta isothermal radiation belt plasmas and affects the propagation characteristics of waves. As whistler waves propagate away from the earth they are scattered in the magnetosphere such that their trajectories are turned earthward where they are reflected back towards the magnetosphere. Repeated scattering and reflection of the whistlers establishes a cavity in which the wave energy can be maintained for a long duration with, on average, a smaller wave-normal angle. Consequently, the cyclotron resonance time for the trapped energetic electrons increases, leading to an enhanced pitch-angle scattering rate. Enhanced pitch-angle scattering lowers the lifetime of the energetic electron population. Also, pitch-angle scattering of the trapped population in the cavity with a loss cone distribution amplifies the whistler waves, which in turn promotes a more rapid precipitation through a positive feedback mechanism. Typical storm-pumped radiation belt parameters and laboratory experiments will be used to elucidate this phenomenon. Work supported by NRL Base Funds.
12. Violation of the Leggett–Garg inequality with weak measurements of photons PubMed Central Goggin, M. E.; Almeida, M. P.; Barbieri, M.; Lanyon, B. P.; O’Brien, J. L.; White, A. G.; Pryde, G. J. 2011-01-01 By weakly measuring the polarization of a photon between two strong polarization measurements, we experimentally investigate the correlation between the appearance of anomalous values in quantum weak measurements and the violation of realism and nonintrusiveness of measurements. A quantitative formulation of the latter concept is expressed in terms of a Leggett–Garg inequality for the outcomes of subsequent measurements of an individual quantum system. We experimentally violate the Leggett–Garg inequality for several measurement strengths. Furthermore, we experimentally demonstrate that there is a one-to-one correlation between achieving strange weak values and violating the Leggett–Garg inequality. PMID:21220296 13. Adapted Resistance Training Improves Strength in Eight Weeks in Individuals with Multiple Sclerosis.
PubMed Keller, Jennifer L; Fritz, Nora; Chiang, Chen Chun; Jiang, Allen; Thompson, Tziporah; Cornet, Nicole; Newsome, Scott D; Calabresi, Peter A; Zackowski, Kathleen 2016-01-01 Hip weakness is a common symptom affecting walking ability in people with multiple sclerosis (MS). It is known that resistance strength training (RST) can improve strength in individuals with MS; however, it remains unclear what duration of RST is needed to produce strength gains and how to adapt hip strengthening exercises for individuals of varying strength using only resistance bands. This paper describes the methodology to set up and implement an adapted resistance strength training program, using resistance bands, for individuals with MS. Directions for pre- and post-strength tests to evaluate efficacy of the strength-training program are included. Safety features and detailed instructions outline the weekly program content and progression. Current evidence is presented showing that significant strength gains can be made within 8 weeks of starting an RST program. Evidence is also presented showing that resistance strength training can be successfully adapted for individuals with MS of varying strength with little equipment. PMID:26863451 14. Distribution and severity of weakness among patients with polymyositis, dermatomyositis and juvenile dermatomyositis PubMed Central Harris-Love, M. O.; Shrader, J. A.; Koziol, D.; Pahlajani, N.; Jain, M.; Smith, M.; Cintas, H. L.; McGarvey, C. L.; James-Newton, L.; Pokrovnichka, A.; Moini, B.; Cabalar, I.; Lovell, D. J.; Wesley, R.; Plotz, P. H.; Miller, F. W.; Hicks, J. E. 2009-01-01 Objective. To describe the distribution and severity of muscle weakness using manual muscle testing (MMT) in 172 patients with PM, DM and juvenile DM (JDM).
The secondary objectives included characterizing individual muscle group weakness and determining associations of weakness with functional status and myositis characteristics in this large cohort of patients with myositis. Methods. Strength was assessed for 13 muscle groups using the 10-point MMT and expressed as a total score, subscores based on functional and anatomical regions, and grades for individual muscle groups. Patient characteristics and secondary outcomes, such as clinical course, muscle enzymes, corticosteroid dosage and functional status were evaluated for association with strength using univariate and multivariate analyses. Results. A gradient of proximal weakness was seen, with PM weakest, DM intermediate and JDM strongest among the three myositis clinical groups (P ≤ 0.05). Hip flexors, hip extensors, hip abductors, neck flexors and shoulder abductors were the muscle groups with the greatest weakness among all three clinical groups. Muscle groups were affected symmetrically. Conclusions. Axial and proximal muscle impairment was reflected in the five weakest muscles shared by our cohort of myositis patients. However, differences in the pattern of weakness were observed among all three clinical groups. Our findings suggest a greater severity of proximal weakness in PM in comparison with DM. PMID:19074186 15. Tie strength distribution in scientific collaboration networks NASA Astrophysics Data System (ADS) Ke, Qing; Ahn, Yong-Yeol 2014-09-01 Science is increasingly dominated by teams. Understanding patterns of scientific collaboration and their impacts on the productivity and evolution of disciplines is crucial to understand scientific processes. Electronic bibliography offers a unique opportunity to map and investigate the nature of scientific collaboration. 
Recent studies have demonstrated a counterintuitive organizational pattern of scientific collaboration networks: densely interconnected local clusters consist of weak ties, whereas strong ties play the role of connecting different clusters. This pattern contrasts with many other types of networks, where strong ties form communities while weak ties connect different communities. Although there are many models for collaboration networks, no model reproduces this pattern. In this paper, we present an evolution model of collaboration networks, which reproduces many properties of real-world collaboration networks, including the organization of tie strengths, skewed degree and weight distribution, high clustering, and assortative mixing. 16. 21 CFR 524.1465 - Mupirocin. Code of Federal Regulations, 2010 CFR 2010-04-01 ... infections of the skin, including superficial pyoderma, caused by susceptible strains of Staphylococcus aureus and S. intermedius. (3) Limitations. Federal law restricts this drug to use by or on the order... 17. Pixelation Effects in Weak Lensing NASA Astrophysics Data System (ADS) High, F. William; Rhodes, Jason; Massey, Richard; Ellis, Richard 2007-11-01 Weak gravitational lensing can be used to investigate both dark matter and dark energy but requires accurate measurements of the shapes of faint, distant galaxies. Such measurements are hindered by the finite resolution and pixel scale of digital cameras. We investigate the optimum choice of pixel scale for a space-based mission, using the engineering model and survey strategy of the proposed Supernova Acceleration Probe as a baseline. We do this by simulating realistic astronomical images containing a known input shear signal and then attempting to recover the signal using the Rhodes, Refregier, & Groth algorithm. We find that the quality of shear measurement is always improved by smaller pixels.
However, in practice, telescopes are usually limited to a finite number of pixels and operational life span, so the total area of a survey increases with pixel size. We therefore fix the survey lifetime and the number of pixels in the focal plane while varying the pixel scale, thereby effectively varying the survey size. In a pure trade-off for image resolution versus survey area, we find that measurements of the matter power spectrum would have minimum statistical error with a pixel scale of 0.09" for a 0.14" FWHM point-spread function (PSF). The pixel scale could be increased to ~0.16" if images dithered by exactly half-pixel offsets were always available. Some of our results do depend on our adopted shape measurement method and should be regarded as an upper limit: future pipelines may require smaller pixels to overcome systematic floors not yet accessible, and, in certain circumstances, measuring the shape of the PSF might be more difficult than those of galaxies. However, the relative trends in our analysis are robust, especially those of the surface density of resolved galaxies. Our approach thus provides a snapshot of potential in available technology, and a practical counterpart to analytic studies of pixelation, which necessarily assume an idealized shape 18. Weak* convergence of operator means NASA Astrophysics Data System (ADS) Romanov, Alexandr V. 2011-12-01 For a linear operator U with \Vert U^n\Vert \le \operatorname{const} on a Banach space X we discuss conditions for the convergence of ergodic operator nets T_\alpha corresponding to the adjoint operator U^* of U in the W^*O-topology of the space \operatorname{End} X^*. The accumulation points of all possible nets of this kind form a compact convex set L in \operatorname{End} X^*, which is the kernel of the operator semigroup G=\overline{\operatorname{co}}\,\Gamma_0, where \Gamma_0=\{U_n^*, n \ge 0\}.
It is proved that all ergodic nets T_\alpha weakly* converge if and only if the kernel L consists of a single element. In the case of X=C(\Omega) and the shift operator U generated by a continuous transformation \varphi of a metrizable compactum \Omega, we trace the relationships among the ergodic properties of U, the structure of the operator semigroups L, G and \Gamma=\overline{\Gamma}_0, and the dynamical characteristics of the semi-cascade (\varphi,\Omega). In particular, if \operatorname{card} L=1, then a) for any \omega \in \Omega the closure of the trajectory \{\varphi^n\omega, n \ge 0\} contains precisely one minimal set m, and b) the restriction (\varphi,m) is strictly ergodic. Condition a) implies the W^*O-convergence of any ergodic sequence of operators T_n \in \operatorname{End} X^* under the additional assumption that the kernel of the enveloping semigroup E(\varphi,\Omega) contains elements obtained from the basis family of transformations \{\varphi^n, n \ge 0\} of the compact set \Omega by using some transfinite sequence of sequential passages to the limit. 19. The Composite Direct Product Model: Strengths, Limitations, and New Approaches. ERIC Educational Resources Information Center Marsh, Herbert W.; Grayson, David The multitrait-multimethod (MTMM) paradigm is used widely to assess construct validity, but D. A. Kenny and D. A. Kashy (1992) lament that even after 30 years we still do not know how to analyze MTMM data. The Composite Direct Product (CDP) model has recently attracted considerable attention. Its strengths and weaknesses are evaluated, and a… 20. Physical adsorption strength in open systems. PubMed Knippenberg, M Todd; Stuart, Steven J; Cooper, Alan C; Pez, G P; Cheng, Hansong 2006-11-23 For a physical adsorption system, the distances of adsorbates from the surface of a substrate can vary significantly, depending on particle loading and interatomic interactions.
Although the total adsorption energy is quantified easily, the normalized, per-particle adsorption energies are more ambiguous if some of these particles are far away from the surface and are interacting only weakly with the substrate. A simple analytical procedure is proposed to characterize the distance dependence of the physisorption strength and effective adsorption capacity. As an example, the method is utilized to describe H2 physisorption in a finite bundle of single-walled carbon nanotubes. PMID:17107125 1. Tongue weakness and somatosensory disturbance following oral endotracheal extubation. PubMed Su, Han; Hsiao, Tzu-Yu; Ku, Shih-Chi; Wang, Tyng-Guey; Lee, Jang-Jaer; Tzeng, Wen-Chii; Huang, Guan-Hua; Chen, Cheryl Chia-Hui 2015-04-01 The tongue plays important roles in mastication, swallowing, and speech, but its sensorimotor function might be affected by endotracheal intubation. The objective of this pilot study was to describe disturbances in the sensorimotor functions of the tongue over 14 days following oral endotracheal extubation. We examined 30 post-extubated patients who had prolonged (≥48 h) oral endotracheal intubation from six medical intensive care units. Another 36 patients, recruited and examined from dental and geriatric outpatient clinics, served as a comparison group. Tongue strength was measured by the Iowa Oral Performance Instrument. Sensory disturbance of the tongue was measured by evaluating light touch sensation, oral stereognosis, and two-point discrimination with standardized protocols. Measurements were taken at three time points (within 48 h, and 7 and 14 days post-extubation) for patients with oral intubation but only once for the comparison group. The results show that independent of age, gender, tobacco use, and comorbidities, tongue strength was lower and its sensory functions were more impaired in patients who had oral intubation than in the comparison group.
Sensory disturbances of the tongue gradually recovered, taking 14 days to become comparable with the comparison group, while weakness of the tongue persisted. In conclusion, patients with oral endotracheal intubation had weakness and somatosensory disturbances of the tongue lasting at least 14 days from extubation, but whether this is caused by intubation and whether it contributes to post-extubation dysphagia should be further investigated. 2. High strength alloys DOEpatents Maziasz, Phillip James [Oak Ridge, TN; Shingledecker, John Paul [Knoxville, TN; Santella, Michael Leonard [Knoxville, TN; Schneibel, Joachim Hugo [Knoxville, TN; Sikka, Vinod Kumar [Oak Ridge, TN; Vinegar, Harold J [Bellaire, TX; John, Randy Carl [Houston, TX; Kim, Dong Sub [Sugar Land, TX 2010-08-31 High strength metal alloys are described herein. At least one composition of a metal alloy includes chromium, nickel, copper, manganese, silicon, niobium, tungsten and iron. Systems, methods, and heaters that include the high strength metal alloys are described herein. At least one heater system may include a canister at least partially made from material containing at least one of the metal alloys. At least one system for heating a subterranean formation may include a tubular that is at least partially made from a material containing at least one of the metal alloys. 3. High strength alloys DOEpatents Maziasz, Phillip James; Shingledecker, John Paul; Santella, Michael Leonard; Schneibel, Joachim Hugo; Sikka, Vinod Kumar; Vinegar, Harold J.; John, Randy Carl; Kim, Dong Sub 2012-06-05 High strength metal alloys are described herein. At least one composition of a metal alloy includes chromium, nickel, copper, manganese, silicon, niobium, tungsten and iron. Systems, methods, and heaters that include the high strength metal alloys are described herein. At least one heater system may include a canister at least partially made from material containing at least one of the metal alloys.
At least one system for heating a subterranean formation may include a tubular that is at least partially made from a material containing at least one of the metal alloys. 4. Spin resonance strength calculations SciTech Connect Courant, E.D. 2008-10-06 In calculating the strengths of depolarizing resonances it may be convenient to reformulate the equations of spin motion in a coordinate system based on the actual trajectory of the particle, as introduced by Kondratenko, rather than the conventional one based on a reference orbit. It is shown that resonance strengths calculated by the conventional and the revised formalisms are identical. Resonances induced by radiofrequency dipoles or solenoids are also treated; with rf dipoles it is essential to consider not only the direct effect of the dipole but also the contribution from oscillations induced by it. 5. Instrumental systematics and weak gravitational lensing NASA Astrophysics Data System (ADS) Mandelbaum, R. 2015-05-01 We present a pedagogical review of the weak gravitational lensing measurement process and its connection to major scientific questions such as dark matter and dark energy. Then we describe common ways of parametrizing systematic errors and understanding how they affect weak lensing measurements. Finally, we discuss several instrumental systematics and how they fit into this context, and conclude with some future perspective on how progress can be made in understanding the impact of instrumental systematics on weak lensing measurements. 6. Managing yourself. Stop overdoing your strengths. PubMed Kaplan, Robert E; Kaiser, Robert B 2009-02-01 Although most managers can recognize an off-kilter leader (consider the highly supportive boss who cuts people too much slack), it's quite difficult to see overkill in yourself. Unfortunately, that's where leadership development tools such as 360-degree surveys fail to deliver, say Kaplan and Kaiser.
Dividing qualities into "strengths" and "weaknesses" and rating them on a five-point scale will not account for strengths overplayed. The authors suggest several strategies, based on their years of consulting experience and research, for figuring out which attributes you've employed to excess and adjusting your behavior accordingly. Strengths taken too far have two consequences: First, they become weaknesses. For instance, quick-wittedness can turn into impatience with others. Second, you're at risk of becoming extremely lopsided--that is, diminishing your capacity on the opposite pole. A leader who is very good at building consensus, for example, may take too long to move into action. To strike a balance between two key leadership dualities--forceful versus enabling, and strategic versus operational--you need to see your actions and motivations clearly. That's no easy task since most leadership development tools don't spell out that you're overdoing your strengths. But there are other ways to bring that information to light. You can start with a review of the highest ratings on your most recent 360 report. Ask yourself: Is this too much of a good thing? Another technique is to make a list of the traits you most want to have as a leader. Are you going to extremes with any of them? To check for lopsidedness, you can prompt feedback from other people with a list of qualities you've composed or one you've gleaned from other sources. Once you know which attributes you're overdoing, you can recalibrate. PMID:19266705
8. Functional organization of excitatory synaptic strength in primary visual cortex PubMed Central Muir, Dylan R.; Houlton, Rachael; Sader, Elie N.; Ko, Ho; Hofer, Sonja B.; Mrsic-Flogel, Thomas D. 2016-01-01 The strength of synaptic connections fundamentally determines how neurons influence each other’s firing.
Excitatory connection amplitudes between pairs of cortical neurons vary over two orders of magnitude, comprising only very few strong connections among many weaker ones [1–9]. Although this highly skewed distribution of connection strengths is observed in diverse cortical areas [1–9], its functional significance remains unknown: it is not clear how connection strength relates to neuronal response properties, nor how strong and weak inputs contribute to information processing in local microcircuits. Here we reveal that the strength of connections between layer 2/3 (L2/3) pyramidal neurons in mouse primary visual cortex (V1) obeys a simple rule—the few strong connections occur between neurons with most correlated responses, while only weak connections link neurons with uncorrelated responses. Moreover, we show that strong and reciprocal connections occur between cells with similar spatial receptive field structure. Although weak connections far outnumber strong connections, each neuron receives the majority of its local excitation from a small number of strong inputs provided by the few neurons with similar responses to visual features. By dominating recurrent excitation, these infrequent yet powerful inputs disproportionately contribute to feature preference and selectivity. Therefore, our results show that the apparently complex organization of excitatory connection strength reflects the similarity of neuronal responses, and suggest that rare, strong connections mediate stimulus-specific response amplification in cortical microcircuits. PMID:25652823 9. Interaction of a weak discontinuity with elementary waves of the Riemann problem NASA Astrophysics Data System (ADS) Radha, R.; Sharma, V. D.
2012-01-01 We study the interaction of a weak discontinuity wave with the elementary waves of the Riemann problem for the one-dimensional Euler equations governing the flow of ideal polytropic gases, and investigate the effects of the initial states and the shock strength on the jumps in shock acceleration and the reflected and transmitted waves. 10. Effect of bolting on roadway support in extremely weak rock. PubMed Li, Qinghai; Shi, Weiping; Qin, Zhongcheng 2016-01-01 In mine roadway support operations, floor bolting not only played a role in floor heave control, but also in reinforcing the roof and its two sides. Correspondingly, bolting of the roof and two sides also played a part in floor heave control. To quantify the effect of such bolting, based on roadway support in extremely weak rock, three physical models were produced and tested in the laboratory. Through comparison of their displacements in the three physical simulation experiments, the reinforcing effect of bolting in extremely weak rock roadways was quantified. Reinforcing coefficients were defined as the displacement ratio between the original and new support regimes. Results indicated that the reinforcing coefficients, for bolting of the roof and its two sides, on the floor, two sides, and roof reached 2.18, 3.56, and 1.81 respectively. The reinforcing coefficients for floor bolting on the floor, two sides, and roof reached 3.06, 2.34, and 1.39 respectively. So in this extremely weak rock, the surrounding rock should be considered as an integral structure in any support operation: this allows for better local strength improvement and provides guidance for future design. 11. Asymmetric Magnetic Reconnection in Weakly Ionized Chromospheric Plasmas NASA Astrophysics Data System (ADS) Murphy, Nicholas A.; Lukin, Vyacheslav S.
2015-06-01 Realistic models of magnetic reconnection in the solar chromosphere must take into account that the plasma is partially ionized and that plasma conditions within any two magnetic flux bundles undergoing reconnection may not be the same. Asymmetric reconnection in the chromosphere may occur when newly emerged flux interacts with pre-existing, overlying flux. We present 2.5D simulations of asymmetric reconnection in weakly ionized, reacting plasmas where the magnetic field strengths, ion and neutral densities, and temperatures are different in each upstream region. The plasma and neutral components are evolved separately to allow non-equilibrium ionization. As in previous simulations of chromospheric reconnection, the current sheet thins to the scale of the neutral–ion mean free path and the ion and neutral outflows are strongly coupled. However, the ion and neutral inflows are asymmetrically decoupled. In cases with magnetic asymmetry, a net flow of neutrals through the current sheet from the weak-field (high-density) upstream region into the strong-field upstream region results from a neutral pressure gradient. Consequently, neutrals dragged along with the outflow are more likely to originate from the weak-field region. The Hall effect leads to the development of a characteristic quadrupole magnetic field modified by asymmetry, but the X-point geometry expected during Hall reconnection does not occur. All simulations show the development of plasmoids after an initial laminar phase.
13. Limits on amplification by Aharonov-Albert-Vaidman weak measurement SciTech Connect Koike, Tatsuhiko; Tanaka, Saki 2011-12-15 We analyze the amplification by the Aharonov-Albert-Vaidman weak quantum measurement on a Sagnac interferometer [Dixon et al., Phys. Rev. Lett. 102, 173601 (2009)] up to all orders of the coupling strength between the measured system and the measuring device. The amplifier transforms a small tilt of a mirror into a large transverse displacement of the laser beam. The conventional analysis has shown that the measured value is proportional to the weak value, so that the amplification can be made arbitrarily large at the cost of decreasing output laser intensity. It is shown that the measured displacement and the amplification factor are in fact not proportional to the weak value and rather vanish in the limit of infinitesimal output intensity. We derive the optimal overlap of the pre- and postselected states with which the amplification becomes maximal.
We also show that nonlinear effects begin to arise in the experiments performed so far, so that any improved experiment, typically with an amplification greater than 100, will require the nonlinear theory to translate the observed value into the original displacement. 14. Notch strength of composites NASA Technical Reports Server (NTRS) Whitney, J. M. 1983-01-01 The notch strength of composites is discussed. The point stress and average stress criteria relate the notched strength of a laminate to the average strength of a relatively long tensile coupon. Tests of notched specimens in which microstrain gages have been placed at or near the edges of the holes have measured strains much larger than those measured in an unnotched tensile coupon. Orthotropic stress concentration analyses of failed notched laminates have also indicated that failure occurred at strains much larger than those experienced on tensile coupons with normal gage lengths. This suggests that the high strains at the edge of a hole can be related to the very short length of fiber subjected to these strains. Lockheed has attempted to correlate a series of tests of several laminates with holes ranging from 0.19 to 0.50 in. Although the average stress criterion correlated well with test results for hole sizes equal to or greater than 0.50 in., it overestimated the laminate strength in the range of hole sizes from 0.19 to 0.38 in. It thus appears that a theory is needed that is based on the mechanics of failure and is more generally applicable to the range of hole sizes and the varieties of laminates found in aircraft construction. 15. High strength composites evaluation SciTech Connect Marten, S.M. 1992-02-01 A high-strength, thick-section, graphite/epoxy composite was identified. The purpose of this development effort was to evaluate candidate materials and provide LANL with engineering properties.
Eight candidate materials (Samples 1000, 1100, 1200, 1300, 1400, 1500, 1600, and 1700) were chosen for evaluation. The Sample 1700 thermoplastic material was the strongest overall. 16. Gender Differences in Strength. ERIC Educational Resources Information Center Heyward, Vivian H.; And Others 1986-01-01 This investigation examined gender differences of 103 physically active men and women in upper and lower body strength as a function of lean body weight and the distribution of muscle and subcutaneous fat in the upper and lower limbs. Results are discussed. (Author/MT) 17. Spin-glass transition in geometrically frustrated antiferromagnets with weak disorder NASA Astrophysics Data System (ADS) Andreanov, A.; Chalker, J. T.; Saunders, T. E.; Sherrington, D. 2010-01-01 We study the effect in geometrically frustrated antiferromagnets of weak, random variations in the strength of exchange interactions. Without disorder the simplest classical models for these systems have macroscopically degenerate ground states, and this degeneracy may prevent ordering at any temperature. Weak exchange randomness favors a small subset of these ground states and induces a spin-glass transition at an ordering temperature determined by the amplitude of modulations in interaction strength. We use the replica approach to formulate a theory for this transition, showing that it falls into the same universality class as conventional spin-glass transitions. In addition, we show that a model with a low concentration of defect bonds can be mapped onto a system of randomly located pseudospins that have dipolar effective interactions. We also present detailed results from Monte Carlo simulations of the classical Heisenberg antiferromagnet on the pyrochlore lattice with weak randomness in nearest-neighbor exchange. 18. 
Multifluid magnetohydrodynamics of weakly ionized plasmas NASA Astrophysics Data System (ADS) Menzel, Raymond show that the total electric field in the asteroid may either be of comparable strength to the electric field predicted by Sonett et al. or vanish depending on the magnetic field geometry. We include the effects of dust grains in the gas and calculate the heating rates in the plasma flow due to ion-neutral scattering and viscous dissipation. We term this newly discovered heating mechanism "electrodynamic heating", use measurements of asteroid electrical conductivities to estimate the upper limits of the possible heating rates and amount of thermal energy that can be deposited in the solid body, and compare these to the heating produced by the decay of radioactive nuclei like Al26. For the second problem we modeled molecular line emission from time-dependent multifluid MHD shock waves in star-forming regions. By incorporating realistic radiative cooling by CO and H2 into the numerical method developed by Ciolek & Roberge (2013), we present the only current models of truly time-dependent multifluid MHD shock waves in weakly-ionized plasmas. Using the physical conditions determined by our models, we present predictions of molecular emission in the form of excitation diagrams, which can be compared to observations of protostellar outflows in order to trace the physical conditions of these environments. Current work focuses on creating models for varying initial conditions and shock ages, which are and will be the subject of several in progress studies of observed molecular outflows and will provide further insight into the physics and chemistry of these flows. 19. An assessment of the strengths and weaknesses of current veterinary systems in the developing world. 
PubMed Cheneau, Y; El Idrissi, A H; Ward, D 2004-04-01 The changes that veterinary services have undergone in the developing world over the last two decades are expected to continue and result in the further privatisation of selected tasks, the decentralisation of decision-making and a move towards more focus on public goods service delivery by State veterinary units. At the same time, global food consumption patterns are changing in numerous ways, which will certainly affect veterinary services delivery systems. These changes include a trend towards increasing globalisation, rapidly escalating consumer demand for animal protein, intensification of livestock production into larger units and growth of the trade of livestock and livestock products. Intensification of livestock production into larger units and global trade will increase the challenges resulting from the resurgence of serious animal diseases, food safety hazards and veterinary public health-related problems. Facing and managing these challenges raises issues related to animal health delivery systems and national policies that will have to be addressed. Strengthening the capacity of State veterinary units to respond to regulatory responsibilities dictated by national laws and international World Trade Organization and OIE (World organisation for animal health) health standards will be at the centre of animal health policies in most developing countries. Creating an environment which facilitates privatised service delivery and supports subcontracting is likely to contribute to improving economic efficiency and providing wider access to veterinary services. Equally important is the issue of professional development, which must be addressed by refocusing veterinary curricula and improving professional standards. The profession will then be in a better position to serve the needs of increasing numbers of consumers. 20. 
Strengths and weakness of neuroscientific investigations of childhood poverty: future directions. PubMed Lipina, Sebastián J; Segretin, M Soledad 2015-01-01 The neuroscientific study of child poverty is a topic that has only recently emerged. In comparison with previous reviews (e.g., Hackman and Farah, 2009; Lipina and Colombo, 2009; Hackman et al., 2010; Raizada and Kishiyama, 2010; Lipina and Posner, 2012), our perspective synthesizes findings, and summarizes both conceptual and methodological contributions, as well as challenges that face current neuroscientific approaches to the study of childhood poverty. The aim of this effort is to identify target areas of study that could potentially help build a basic and applied research agenda for the coming years. 2. The strengths and weaknesses of inverted pendulum models of human walking. PubMed McGrath, Michael; Howard, David; Baker, Richard 2015-02-01 An investigation into the kinematic and kinetic predictions of two "inverted pendulum" (IP) models of gait was undertaken. The first model consisted of a single leg, with anthropometrically correct mass and moment of inertia, and a point mass at the hip representing the rest of the body. A second model incorporating the physiological extension of a head-arms-trunk (HAT) segment, held upright by an actuated hip moment, was developed for comparison. Simulations were performed, using both models, and quantitatively compared with empirical gait data. There was little difference between the two models' predictions of kinematics and ground reaction force (GRF). The models agreed well with empirical data through mid-stance (20-40% of the gait cycle) suggesting that IP models adequately simulate this phase (mean error less than one standard deviation).
IP models are not cyclic, however, and cannot adequately simulate double support and step-to-step transition. This is because the forces under both legs augment each other during double support to increase the vertical GRF. The incorporation of an actuated hip joint was the most novel change and added a new dimension to the classic IP model. The hip moment curve produced was similar to those measured during experimental walking trials. As a result, it was interpreted that the primary role of the hip musculature in stance is to keep the HAT upright. Careful consideration of the differences between the models throws light on what the different terms within the GRF equation truly represent. 3. Identifying service quality strengths and weaknesses using SERVQUAL: a study of dental services. PubMed Kaldenberg, D; Becker, B W; Browne, B A; Browne, W G 1997-01-01 The goal of this study was to examine responses among dental patients to the most recent version of SERVQUAL, and to evaluate that instrument as a tool for measuring satisfaction in a dental practice. Items on the reliability and responsiveness dimensions produced the lowest satisfaction ratings, while improvements in providing services as promised and instilling confidence have the greatest potential for producing higher satisfaction among patients. Finally, using open-ended questions, we identified a number of patient events or experiences which caused either high or low scores on individual SERVQUAL items. 4. Strengths and Weaknesses in Reading Skills of Youth with Intellectual Disabilities ERIC Educational Resources Information Center Channell, Marie Moore; Loveall, Susan J.; Conners, Frances A. 2013-01-01 Reading-related skills of youth with intellectual disability (ID) were compared with those of typically developing (TD) children of similar verbal ability level. 
The group with ID scored lower than the TD group on word recognition and phonological decoding, but similarly on orthographic processing and rapid automatized naming (RAN). Further,… 6. Understanding Learner Strengths and Weaknesses: Assessing Performance on an Integrated Writing Task ERIC Educational Resources Information Center Sawaki, Yasuyo; Quinlan, Thomas; Lee, Yong-Won 2013-01-01 The present study examined the factor structures across features of 446 examinees' responses to a writing task that integrates reading and listening modalities as well as reading and listening comprehension items of the TOEFL iBT[R] (Internet-based test). Both human and automated scores obtained for the integrated essays were utilized. Based on a… 7. Strengths and Weaknesses in Executive Functioning in Children with Intellectual Disability ERIC Educational Resources Information Center Danielsson, Henrik; Henry, Lucy; Messer, David; Ronnberg, Jerker 2012-01-01 Children with intellectual disability (ID) were given a comprehensive range of executive functioning measures, which systematically varied in terms of verbal and non-verbal demands.
Their performance was compared to the performance of groups matched on mental age (MA) and chronological age (CA), respectively. Twenty-two children were included in… 8. Specific Learning Disability Identification: What Constitutes a Pattern of Strengths and Weaknesses? ERIC Educational Resources Information Center Schultz, Edward Karl; Simpson, Cynthia G.; Lynch, Sharon 2012-01-01 The 2004 Individuals with Disabilities Education Improvement Act (IDEA) and subsequent regulations published in 2006 have significantly changed the identification process for students suspected of having specific learning disabilities. Rather than using a discrepancy model contrasting intellectual and achievement test results, assessment… 9. The Vienna Temperature Series: Strengths and weaknesses for the use in climate change analyses NASA Astrophysics Data System (ADS) Auer, I.; Böhm, R.; Gruber, C.; Jurković, A. 2010-09-01 Strakosch-Grassmann (1932*) reports on the first instrumental measurements in Vienna, taken in 1697 for a span of only eight months. Later, continuous measurements were carried out at the observatory of the Jesuit College from 1734 and at the k.k. Universitätssternwarte (astronomical observatory of the University of Vienna) from 1762. Unfortunately, most of the data before 1775 have been lost. The HISTALP (http://www.zamg.ac.at/histalp) temperature series of Vienna is a composite of Wien-Universitätssternwarte, Wien-Favoritenstraße and Wien-Hohe Warte. It allows climate variability to be studied for more than 235 years, and it has been used frequently in national and international studies. Although the Vienna series has been homogenized and quality-checked with the greatest care, some uncertainties persist. These especially concern the very early measurements, due to insufficient sheltering, and the measurements of the last 60 years, due to an increasing trend in the Viennese urban heat island. *Strakosch-Graßmann G, 1932.
Neue Quellen zur Geschichte der Witterung in Europa vom 16. bis zum 18. Jahrhundert. Met. Zeitschr. 1932, S 397. 10. Religion as a Source of Strength or Weakness in Young Adult Literature. ERIC Educational Resources Information Center Fuchs, Lucy A survey of books for young people reveals that some of the best (and even award-winning) novels deal with the controversial issue of religion. Although most of these books deal with religion only in the background, some clearly present this issue in the forefront. One book, Cynthia Rylant's "A Fine White Dust" (1986), traces a religious quest. In… 11. Strengths and Weaknesses of the Information Technology Curriculum in Library and Information Science Graduate Programs ERIC Educational Resources Information Center Singh, Vandana; Mehra, Bharat 2013-01-01 This research highlights the status of the information technology (IT) skills and competencies being taught at LIS schools in the United States. Results list specific IT topics that the library schools are teaching and the ones that are missing from the curriculum. Based on a literature review, these skills are then juxtaposed with the expectations… 12. Smaller Forbush Decreases in Solar Cycle 24: Effect of the Weak CME Field Strength? NASA Astrophysics Data System (ADS) Thakur, N. 2015-12-01 A Forbush decrease (FD) is a sudden depression in the intensity of the galactic cosmic ray (GCR) background, followed by a gradual recovery. One of the major causes of FDs is the presence of magnetic structures such as magnetic clouds (MCs) or corotating interaction regions (CIRs) that have an enhanced magnetic field, which can scatter particles away, reducing the observed GCR intensity. Recent work (Gopalswamy et al. 2014, GRL 41, 2673) suggests that coronal mass ejections (CMEs) are expanding anomalously in solar cycle 24 due to the reduced total pressure in the ambient medium.
One of the consequences of the anomalous expansion is the reduced magnetic content of MCs, so we expect subdued FDs in cycle 24. In this paper, we present preliminary results from a survey of FDs during MC events in cycle 24 in comparison with those in cycle 23. We find that only ~17% FDs in cycle 24 had an amplitude >3%, as compared to ~31% in cycle 23. This result is consistent with the difference in the maximum magnetic field intensities (Bmax) of MCs in the two cycles: only ~ 10% of MCs in cycle 24 have Bmax>20nT, compared to 22% in cycle 23, confirming that MCs of cycle 24 have weaker magnetic field content. Therefore, we suggest that weaker magnetic field intensity in the magnetic clouds of cycle 24 has led to FDs with smaller amplitudes. 13. Perceptions of Online TESOL Teacher Education: Strengths, Weaknesses, Characteristics, and Effective Components ERIC Educational Resources Information Center Chen, Susan Tiffany 2012-01-01 Recent and ongoing expansion of online opportunities for teacher education and training continue in response to calls for better teacher preparation and professional development opportunities. However, with the introduction of online learning, the already controversial debate over educational technology has taken on a new dimension. Today's… 14. Social Network Perspectives Reveal Strength of Academic Developers as Weak Ties ERIC Educational Resources Information Center Matthews, Kelly E.; Crampton, Andrea; Hill, Matthew; Johnson, Elizabeth D.; Sharma, Manjula D.; Varsavsky, Cristina 2015-01-01 Social network perspectives acknowledge the influence of disciplinary cultures on academics' teaching beliefs and practices with implications for academic developers. The contribution of academic developers in 18 scholarship of teaching and learning (SoTL) projects situated in the sciences are explored by drawing on data from a two-year national… 15. Comparison of PubMed, Scopus, Web of Science, and Google Scholar: strengths and weaknesses. 
PubMed Falagas, Matthew E; Pitsouni, Eleni I; Malietzis, George A; Pappas, Georgios 2008-02-01 The evolution of the electronic age has led to the development of numerous medical databases on the World Wide Web, offering search facilities on a particular subject and the ability to perform citation analysis. We compared the content coverage and practical utility of PubMed, Scopus, Web of Science, and Google Scholar. The official Web pages of the databases were used to extract information on the range of journals covered, search facilities and restrictions, and update frequency. We used the example of a keyword search to evaluate the usefulness of these databases in biomedical information retrieval and a specific published article to evaluate their utility in performing citation analysis. All databases were practical in use and offered numerous search facilities. PubMed and Google Scholar are accessed for free. The keyword search with PubMed offers optimal update frequency and includes online early articles; other databases can rate articles by number of citations, as an index of importance. For citation analysis, Scopus offers about 20% more coverage than Web of Science, whereas Google Scholar offers results of inconsistent accuracy. PubMed remains an optimal tool in biomedical electronic research. Scopus covers a wider journal range, of help both in keyword searching and citation analysis, but it is currently limited to recent articles (published after 1995) compared with Web of Science. Google Scholar, as for the Web in general, can help in the retrieval of even the most obscure information but its use is marred by inadequate, less often updated, citation information. 16. 
Internal Quality Assurance Systems in Portugal: What Their Strengths and Weaknesses Reveal ERIC Educational Resources Information Center Tavares, Orlanda; Sin, Cristina; Amaral, Alberto 2016-01-01 In Portugal, the agency for assessment and accreditation of higher education has recently included in its remit, beyond programme accreditation, the certification of internal quality assurance systems. This implies lighter touch accreditation and aims to direct institutions towards improvement, in addition to accountability. Twelve institutions… 17. Integrating Social Networking Tools into ESL Writing Classroom: Strengths and Weaknesses ERIC Educational Resources Information Center Yunus, Melor Md; Salehi, Hadi; Chenzi, Chen 2012-01-01 With the rapid development of world and technology, English learning has become more important. Teachers frequently use teacher-centered pedagogy that leads to lack of interaction with students. This paper aims to investigate the advantages and disadvantages of integrating social networking tools into ESL writing classroom and discuss the ways to… 18. Strategic marketing management for health management: cross impact matrix and TOWS (threats, opportunities, weaknesses, strengths). PubMed Proctor, T 2000-01-01 Organisations operate within a three-tiered environment--internal, micro and macro. The environment is a powerful force acting upon the effectiveness of strategic decision making. Failure to take cognisance of the influence of the three-tiered environment can have disastrous consequences. The cross-impact matrix and the TOWS matrix are two strategic decision-making aids that improve effective decision making. When used in conjunction with creative problem solving methods they can provide the basis of a powerful management tool. 19. 
Young People's Satisfaction with Residential Care: Identifying Strengths and Weaknesses in Service Delivery ERIC Educational Resources Information Center Southwell, Jenni; Fraser, Elizabeth 2010-01-01 This paper presents findings from a landmark Australian study investigating the experiences and perspectives of young people in residential care. Data from a representative sample are analyzed to identify young people's satisfaction with various aspects of their residential care experience: their sense of safety, normality, support, comfort in… 20. Commercial Crop Yields Reveal Strengths and Weaknesses for Organic Agriculture in the United States PubMed Central Savage, Steven D.; Jabbour, Randa 2016-01-01 Land area devoted to organic agriculture has increased steadily over the last 20 years in the United States, and elsewhere around the world. A primary criticism of organic agriculture is lower yield compared to non-organic systems. Previous analyses documenting the yield deficiency in organic production have relied mostly on data generated under experimental conditions, but these studies do not necessarily reflect the full range of innovation or practical limitations that are part of commercial agriculture. The analysis we present here offers a new perspective, based on organic yield data collected from over 10,000 organic farmers representing nearly 800,000 hectares of organic farmland. We used publicly available data from the United States Department of Agriculture to estimate yield differences between organic and conventional production methods for the 2014 production year. Similar to previous work, organic crop yields in our analysis were lower than conventional crop yields for most crops. Averaged across all crops, organic yield averaged 80% of conventional yield. However, several crops had no significant difference in yields between organic and conventional production, and organic yields surpassed conventional yields for some hay crops. 
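The crop-yield comparison just described reduces, per crop, to an organic-to-conventional yield ratio. A minimal sketch of that calculation follows; the yield figures are entirely hypothetical, not the USDA survey values:

```python
# Hypothetical per-crop yields (tonnes/ha); NOT the USDA survey figures.
# Format: crop -> (organic yield, conventional yield)
yields = {
    "soybean": (2.4, 3.2),
    "wheat": (2.1, 3.0),
    "hay": (6.6, 6.0),
}

def yield_ratio(organic: float, conventional: float) -> float:
    """Organic-to-conventional yield ratio; 1.0 means parity,
    values below 1.0 indicate an organic yield gap."""
    return organic / conventional

ratios = {crop: yield_ratio(o, c) for crop, (o, c) in yields.items()}
overall = sum(ratios.values()) / len(ratios)  # unweighted mean across crops

for crop, r in sorted(ratios.items()):
    status = "organic ahead" if r > 1.0 else "organic behind"
    print(f"{crop}: {r:.2f} ({status})")
print(f"mean ratio across crops: {overall:.2f}")
```

With these illustrative inputs, hay shows a ratio above 1.0 (mirroring the finding that organic surpassed conventional for some hay crops) while the row crops fall below it.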
The organic to conventional yield ratio varied widely among crops, and in some cases, among locations within a crop. For soybean (Glycine max) and potato (Solanum tuberosum), organic yield was more similar to conventional yield in states where conventional yield was greatest. The opposite trend was observed for barley (Hordeum vulgare), wheat (Triticum aestivum), and hay crops, however, suggesting that geographical yield potential has an inconsistent effect on the organic yield gap. PMID:27552217 1. PS1-36: VDW Pharmacy File: Strengths, Weaknesses, and Recommendations PubMed Central Moore, Kristen M; Cheetham, T Craig; Dublin, Sascha; Hecht, Julia; Robinson, Scott; Brown, Jeffrey S 2010-01-01 Background: The HMO Research Network (HMORN) Virtual Data Warehouse (VDW) is a series of dataset standards and automated processes designed to facilitate the process of multisite research. It is virtual in that there is no centralized database; data continue to reside locally. Our objective was to assess the cross-site availability and completeness of the outpatient prescription dispensing or Pharmacy file. Methods: The VDW Pharmacy Working Group created a data checking plan to assess overall data availability and completeness of the Pharmacy file. A distributed SAS program was run at each HMORN site with a VDW Pharmacy file (n=11), with de-identified summary count data returned for analysis. The key Pharmacy file variables were National Drug Code (NDC), days supplied, and amount dispensed. NDCs were considered nonstandard for these reasons: not having exactly 11 digits; containing a character; or consisting of a single repeated number (e.g., 99999999999). For days supplied and amount dispensed, values of < 0, 0, > 500, or missing and values of < 0, 0, > 1000, or missing were considered out of typical range, respectively. Results: Eleven sites have Pharmacy data from 2000 to June 2007; some have data going back more than 10 years.
There were 61 million dispensings in 2007, with an average of 5.1 million monthly dispensings among 2 million monthly users. Average monthly dispensings per user was 2.5, and the range across sites was 2.2–2.9. Across all sites from 2000–2007, < 0.1% and 0.3% of dispensings had a missing or nonstandard NDC, respectively; and 0.1% and 0.3% had days supplied and amount dispensed out of typical range, respectively. Few dispensings were for over-the-counter medications (5%), medical supplies (2%), or infusions (0.1%). 89% of elderly (age > 65) with a drug benefit had at least 1 annual dispensing (average monthly dispensings per user, 3.5), compared to 52% of those without a benefit (average, 3.2). Conclusions: The VDW Pharmacy file has good overall data quality, with few problems found, especially since 2000. Investigators can have confidence in the data in the Pharmacy file but may choose to limit studies to more recent years.
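The data-checking rules in the VDW Pharmacy abstract are concrete enough to sketch in code. The original was a distributed SAS program; this Python sketch, with hypothetical function names, only restates the rules given in the abstract: an NDC is nonstandard unless it is exactly 11 digits, all numeric, and not a single repeated digit; days supplied is in typical range only in (0, 500]; amount dispensed only in (0, 1000]:

```python
def is_standard_ndc(ndc: str) -> bool:
    """An NDC is nonstandard if it is not exactly 11 digits, contains a
    non-digit character, or is a single repeated digit (e.g. 99999999999)."""
    if len(ndc) != 11 or not ndc.isdigit():
        return False
    if len(set(ndc)) == 1:  # single repeated number
        return False
    return True

def days_supplied_in_range(days) -> bool:
    """Out of typical range: missing, < 0, 0, or > 500."""
    return days is not None and 0 < days <= 500

def amount_dispensed_in_range(amount) -> bool:
    """Out of typical range: missing, < 0, 0, or > 1000."""
    return amount is not None and 0 < amount <= 1000
```

Each dispensing record would be flagged by whichever of the three checks it fails, and the site-level fractions of flagged records are then compared across sites, as in the abstract's Results.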
4. Changing patterns of intellectual strengths and weaknesses in males with fragile X syndrome. PubMed Hodapp, R M; Dykens, E M; Ort, S I; Zelinsky, D G; Leckman, J F 1991-12-01 Examined the changing profiles of intelligence in males with fragile X syndrome as these individuals increased in chronological age.
Using a psychometric instrument designed to measure styles of information processing, 21 males aged 4 to 27 years were examined cross-sectionally in sequential processing, simultaneous processing, and achievement. The age of the subject was associated with age-equivalent levels of both simultaneous processing and achievement, but fragile X males did not show higher levels of sequential processing with increasing chronological age. Compared to younger fragile X males, the older subjects were more delayed in sequential processing skills relative to their skills in other areas. A smaller longitudinal study confirmed the presence of a plateau in sequential processing among those subjects tested twice after the age of 10 years. Implications are discussed for diagnosis, intervention, and the matching of subject groups in mental retardation research.
Similar to previous work, organic crop yields in our analysis were lower than conventional crop yields for most crops. Averaged across all crops, organic yield averaged 80% of conventional yield. However, several crops had no significant difference in yields between organic and conventional production, and organic yields surpassed conventional yields for some hay crops. The organic to conventional yield ratio varied widely among crops, and in some cases, among locations within a crop. For soybean (Glycine max) and potato (Solanum tuberosum), organic yield was more similar to conventional yield in states where conventional yield was greatest. The opposite trend was observed for barley (Hordeum vulgare), wheat (Triticum aestivum), and hay crops, however, suggesting that geographical yield potential has an inconsistent effect on the organic yield gap. PMID:27552217 6. Socio-Cognitive Understanding: A Strength or Weakness in Down's Syndrome? ERIC Educational Resources Information Center Wishart, J. G. 2007-01-01 Background: Social understanding is often thought to be relatively "protected" in children with Down's syndrome (DS) and to underlie the outgoing personality characteristically attributed to them. This paper draws together findings from our studies of behaviours during object concept testing, generally considered a theoretically "pure" measure of… 7. Strengths and weaknesses in reading skills of youth with intellectual disabilities PubMed Central Channell, Marie Moore; Loveall, Susan J.; Conners, Frances A. 2016-01-01 Reading-related skills of youth with intellectual disability (ID) were compared with those of typically developing (TD) children of similar verbal ability level. The group with ID scored lower than the TD group on word recognition and phonological decoding, but similarly on orthographic processing and rapid automatized naming (RAN).
Further, phonological decoding significantly mediated the relation between group membership and word recognition, whereas neither orthographic processing nor RAN did so. The group with ID also underperformed the TD group on phonological awareness and phonological memory, both of which significantly mediated the relation between group membership and phonological decoding. These data suggest that poor word recognition in youth with ID may be due largely to poor phonological decoding, which in turn may be due largely to poor phonological awareness and poor phonological memory. More focus on phonological skills in the classroom may help students with ID to develop better word recognition skills. PMID:23220054 8. [Constructing the nursing profession from the perspective of gender. A weakness or strength?]. PubMed Rodríguez García, Marta; Martínez Miguel, Esther; Tovar Reinoso, Alberto; González Hervias, Raquel; Goday Arean, Carmen; García Salinero, Julia 2009-01-01 To care for someone is a life-giving act whose existence goes so far back that it is lost in human memory. Despite the fact that the concept of health has evolved over the course of time, there is a constant factor, both in the social and in the practical context, which ties care-giving implicitly to women. Qualities regarded as innate to the feminine sex, including vocation, kindness, dedication, and softness, have contributed to undervaluing professional nursing work and to granting it little social recognition. As a group, nurses have been placed in the paradigm of oppression, owing to the hierarchical vigilance that medicine, a field located at the apex of power, has exercised over this profession. At present, the humanization of caregiving and the changing role of patients act favorably, so that nurses are beginning to question the current situation of their profession and to ask what can be done so that it evolves independently. 9. The strengths and weaknesses of inverted pendulum models of human walking.
PubMed McGrath, Michael; Howard, David; Baker, Richard 2015-02-01 An investigation into the kinematic and kinetic predictions of two "inverted pendulum" (IP) models of gait was undertaken. The first model consisted of a single leg, with anthropometrically correct mass and moment of inertia, and a point mass at the hip representing the rest of the body. A second model incorporating the physiological extension of a head-arms-trunk (HAT) segment, held upright by an actuated hip moment, was developed for comparison. Simulations were performed using both models and quantitatively compared with empirical gait data. There was little difference between the two models' predictions of kinematics and ground reaction force (GRF). The models agreed well with empirical data through mid-stance (20-40% of the gait cycle), suggesting that IP models adequately simulate this phase (mean error less than one standard deviation). IP models are not cyclic, however, and cannot adequately simulate double support and step-to-step transition. This is because the forces under both legs augment each other during double support to increase the vertical GRF. The incorporation of an actuated hip joint was the most novel change and added a new dimension to the classic IP model. The hip moment curve produced was similar to those measured during experimental walking trials. As a result, it was interpreted that the primary role of the hip musculature in stance is to keep the HAT upright. Careful consideration of the differences between the models throws light on what the different terms within the GRF equation truly represent. PMID:25468688 10. SURVEY AND SUMMARY: Current methods of gene prediction, their strengths and weaknesses PubMed Central Mathé, Catherine; Sagot, Marie-France; Schiex, Thomas; Rouzé, Pierre 2002-01-01 While the genomes of many organisms have been sequenced over the last few years, transforming such raw sequence data into knowledge remains a hard task.
A great number of prediction programs have been developed that try to address one part of this problem, which consists of locating the genes along a genome. This paper reviews the existing approaches to predicting genes in eukaryotic genomes and underlines their intrinsic advantages and limitations. The main mathematical models and computational algorithms adopted are also briefly described and the resulting software classified according to both the method and the type of evidence used. Finally, the difficulties and pitfalls encountered by the programs are detailed, showing that improvements are needed and that new directions must be considered. PMID:12364589 11. Institutional Assessment Tools for Sustainability in Higher Education: Strengths, Weaknesses, and Implications for Practice and Theory ERIC Educational Resources Information Center Shriberg, Michael 2002-01-01 This paper analyzes recent efforts to measure sustainability in higher education across institutions. The benefits of cross-institutional assessments include: identifying and benchmarking leaders and best practices; communicating common goals, experiences, and methods; and providing a directional tool to measure progress toward the concept of a… 12. Weak rigidity in the PPN formalism SciTech Connect del Olmo, V.; Olivert, J. 1987-04-01 The influence of the concept of weakly rigid almost-thermodynamic material schemes on classical deformations is analyzed. The methods of the PPN approximation are considered. In this formalism, the equations that characterize weak rigidity are expressed. As a consequence, an increase of two orders of magnitude in the strain rate tensor is obtained. 13. On modeling weak sinks in MODPATH USGS Publications Warehouse Abrams, Daniel B.; Haitjema, Henk; Kauffman, Leon J. 2012-01-01 Regional groundwater flow systems often contain both strong sinks and weak sinks.
A strong sink extracts water from the entire aquifer depth, while a weak sink lets some water pass underneath or over the actual sink. The numerical groundwater flow model MODFLOW may allow a sink cell to act as a strong or weak sink, hence extracting all water that enters the cell or allowing some of that water to pass. A physical strong sink can be modeled by either a strong sink cell or a weak sink cell, with the latter generally occurring in low resolution models. Likewise, a physical weak sink may also be represented by either type of sink cell. The representation of weak sinks in the particle tracing code MODPATH is more equivocal than in MODFLOW. With the appropriate parameterization of MODPATH, particle traces and their associated travel times to weak sink streams can be modeled with adequate accuracy, even in single layer models. Weak sink well cells, on the other hand, require special measures as proposed in the literature to generate correct particle traces and individual travel times and hence capture zones. We found that the transit time distributions for well water generally do not require special measures provided aquifer properties are locally homogeneous and the well draws water from the entire aquifer depth, an important observation for determining the response of a well to non-point contaminant inputs. 14. Spin Seebeck effect in a weak ferromagnet NASA Astrophysics Data System (ADS) Arboleda, Juan David; Arnache Olmos, Oscar; Aguirre, Myriam Haydee; Ramos, Rafael; Anadon, Alberto; Ibarra, Manuel Ricardo 2016-06-01 We report the observation of the room-temperature spin Seebeck effect (SSE) in the weakly ferromagnetic normal spinel zinc ferrite (ZFO). Despite the weak ferromagnetic behavior, measurements of the SSE in ZFO show a thermoelectric voltage response comparable with the reported values for other ferromagnetic materials. Our results suggest that the SSE might originate from the surface magnetization of the ZFO. 15.
Staggering towards a calculation of weak amplitudes SciTech Connect Sharpe, S.R. 1988-09-01 An explanation is given of the methods required to calculate hadronic matrix elements of the weak Hamiltonians using lattice QCD with staggered fermions. New results are presented for the 1-loop perturbative mixing of the weak interaction operators. New numerical techniques designed for staggered fermions are described. A preliminary result for the kaon B parameter is presented. 24 refs., 3 figs. 16. CP Violation, Neutral Currents, and Weak Equivalence DOE R&D Accomplishments Database Fitch, V. L. 1972-03-23 Within the past few months two excellent summaries of the state of our knowledge of the weak interactions have been presented. Correspondingly, we will not attempt a comprehensive review but instead concentrate this discussion on the status of CP violation, the question of the neutral currents, and the weak equivalence principle. 17. Advances in weak-values based metrology NASA Astrophysics Data System (ADS) Jordan, Andrew; Viza, Gerardo; Martínez-Rincón, Julián; Alves, Gabriel; Howell, John; Kwiat, Paul 2015-03-01 We theoretically and experimentally describe the relative advantages of implementing weak-values-based metrology versus standard methods. To accomplish this, we measure small optical beam deflections using both a weak-values-based technique and a standard technique. By introducing controlled external modulations of the detector and transverse beam-jitter, we quantify the mitigation of these noise sources in the weak-values-based experiment versus the standard experiment. In all cases, the weak-values technique performs the same as or better than the standard technique, by up to two orders of magnitude in precision for our parameters. We further measure the statistical efficiency of the weak-values-based technique. By post-selecting on 1% of the photons, we obtain 99% of the available Fisher information of the beam deflection parameter.
We also discuss ways to recycle the discarded events, obtaining much greater precision on a measured parameter. 18. Atomic homodyne detection of weak atomic transitions. PubMed Gunawardena, Mevan; Elliott, D S 2007-01-26 We have developed a two-color, two-pathway coherent control technique to detect and measure weak optical transitions in atoms by coherently beating the transition amplitude for the weak transition with that of a much stronger transition. We demonstrate the technique in atomic cesium, exciting the 6s ²S₁/₂ → 8s ²S₁/₂ transition via a strong two-photon transition and a weak controllable Stark-induced transition. We discuss the enhancement in the signal-to-noise ratio for this measurement technique over that of direct detection of the weak transition rate, and project future refinements that may further improve its sensitivity and application to the measurement of other weak atomic interactions. 19. Weakly nonlinear electrophoresis of a highly charged colloidal particle NASA Astrophysics Data System (ADS) Schnitzer, Ory; Zeyde, Roman; Yavneh, Irad; Yariv, Ehud 2013-05-01 At large zeta potentials, surface conduction becomes appreciable in thin-double-layer electrokinetic transport. In the linear weak-field regime, where this effect is quantified by the Dukhin number, it is manifested in non-Smoluchowski electrophoretic mobilities. In this paper we go beyond linear response, employing the recently derived macroscale model of Schnitzer and Yariv ["Macroscale description of electrokinetic flows at large zeta potentials: Nonlinear surface conduction," Phys. Rev. E 86, 021503 (2012), 10.1103/PhysRevE.86.021503] as the infrastructure for a weakly nonlinear analysis of spherical-particle electrophoresis.
A straightforward perturbation in the field strength is frustrated by the failure to satisfy the far-field conditions, representing a non-uniformity of the weak-field approximation at large distances away from the particle, where salt advection becomes comparable to diffusion. This is remedied using inner-outer asymptotic expansions in the spirit of Acrivos and Taylor ["Heat and mass transfer from single spheres in Stokes flow," Phys. Fluids 5, 387 (1962), 10.1063/1.1706630], with the inner region representing the particle neighborhood and the outer region corresponding to distances scaling inversely with the field magnitude. This singular scheme furnishes an asymptotic correction to the electrophoretic velocity, proportional to the applied field cubed, which embodies a host of nonlinear mechanisms unfamiliar from linear electrokinetic theories. These include the effect of induced zeta-potential inhomogeneity, animated by concentration polarization, on electro-osmosis and diffuso-osmosis; bulk advection of salt; nonuniform bulk conductivity; Coulomb body forces acting on bulk volumetric charge; and the nonzero electrostatic force exerted upon the otherwise screened particle-layer system. A numerical solution of the macroscale model validates our weakly nonlinear analysis. 20. Weak positive cloud-to-ground flashes in Northeastern Colorado NASA Technical Reports Server (NTRS) Lopez, Raul E.; Maier, Michael W.; Garcia-Miguel, Juan A.; Holle, Ronald L. 1991-01-01 The frequency distributions of the peak magnetic field associated with the first detected return stroke of positive and negative cloud-to-ground (CG) flashes were studied using lightning data from northeastern Colorado. These data were obtained during 1985 with a medium-to-high gain network of three direction finders (DF's). 
The median signal strength of positive flashes was almost twice that of the negatives for flashes within 300 km of the DF's, which have an inherent detection-threshold bias that tends to discriminate against weak signals. This bias increases with range, and affects the detection of positive and negative flashes in different ways, because of the differing character of their distributions. Positive flashes appear to have a large percentage of signals clustered around very weak values that are lost to the medium-to-high gain Colorado Detection System very quickly with increasing range. The resulting median for positive signals could thus appear to be much larger than the median for negative signals, which are more clustered around intermediate values. When only flashes very close to the DF's are considered, however, the two distributions have almost identical medians. The large percentage of weak positive signals detected close to the DF's has not been explored previously. They have been suggested to come from intracloud discharges and thus to be improperly classified as CG flashes. The evidence in hand, however, points to their being real, albeit weak, positive CG flashes. Whether or not they are real positive ground flashes, it is important to be aware of their presence in data from magnetic DF networks. 1. Strength calculations on airplanes NASA Technical Reports Server (NTRS) Baumann, A 1925-01-01 Every strength calculation, including those on airplanes, must be preceded by a determination of the forces to be taken into account. In the following discussion, it will be assumed that the magnitudes of these forces are known and that it is only a question of how, on the basis of these known forces, to meet the prescribed conditions on the one hand and the practical requirements on the other. 2. Spectrum of Mathematical Weaknesses: Related Neuropsychological Correlates.
PubMed Perna, Robert; Loughan, Ashlee R; Le, Jessica; Hertza, Jeremy; Cohen, Morris J 2015-01-01 Math disorders have been recognized for as long as language disorders, yet have received far less research attention. Mathematics is a complex construct and its development may be dependent on multiple cognitive abilities. Several studies have shown that short-term memory, working memory, visuospatial skills, processing speed, and various language skills relate to and may facilitate math development and performance. The hypotheses explored in this research were that children who performed worse on math achievement than on Full-Scale IQ would exhibit weaknesses in executive functions, memory, and visuoperceptual skills. Participants included 436 children (27% girls, 73% boys; age range = 5-17 years, M(age) = 9.45 years) who were referred for neuropsychological evaluations due to academic and/or behavioral problems. This article specifically focuses on the spectrum of math weakness rather than clinical disability, which has yet to be investigated in the literature. Results suggest that children with relative weaknesses to impairments in math were significantly more likely to have cognitive weaknesses to impairments on neuropsychological variables, as compared with children without math weaknesses. Specifically, the math-weak children exhibit a weakness to impairment on measures involving attention, language, visuoperceptual skills, memory, reading, and spelling. Overall, our results suggest that math development is multifaceted. PMID:25117216 3. Dynamics of Weak, Bifurcated and Strong Hydrogen Bonds in Lithium Nitrate Trihydrate SciTech Connect Werhahn, Jasper C.; Pandelov, S.; Xantheas, Sotiris S.; Iglev, H. 2011-07-07 The properties of three distinct types of hydrogen bonds, namely a weak, a bifurcated and a strong one, all present in the LiNO3(HDO)(D2O)2 hydrate lattice unit cell, are studied using steady-state and time-resolved spectroscopy.
The lifetimes of the OH stretching vibrations for the three individual bonds are 2.2 ps (weak), 1.7 ps (bifurcated), and 1.2 ps (strong), respectively. For the first time, the properties of bifurcated H bonds can thus be directly and unambiguously compared to those of weak and strong H bonds in the same system. The values of their OH stretching vibration lifetime, anharmonicity, red shift, and bond strength lie between those for the strong and weak H bonds. The experimentally observed inhomogeneous broadening of their spectral signature is attributed to coupling with a low-frequency intermolecular wagging vibration. 4. Abdominal muscle and quadriceps strength in chronic obstructive pulmonary disease PubMed Central Man, W; Hopkinson, N; Harraf, F; Nikoletou, D; Polkey, M; Moxham, J 2005-01-01 Background: Quadriceps muscle weakness is common in chronic obstructive pulmonary disease (COPD) but is not observed in a small hand muscle (adductor pollicis). Although this could be explained by reduced activity in the quadriceps, the observation could also be explained by the anatomical location of the muscle or its fibre type composition. However, the abdominal muscles are of a similar anatomical location and fibre type distribution to the quadriceps, although they remain active in COPD. Cough gastric pressure is a recently described technique that assesses abdominal muscle (and hence expiratory muscle) strength more accurately than traditional techniques. A study was undertaken to test the hypothesis that more severe weakness exists in the quadriceps than in the abdominal muscles of patients with COPD compared with healthy elderly controls. Methods: Maximum cough gastric pressure and quadriceps isometric strength were measured in 43 patients with stable COPD and 25 healthy elderly volunteers matched for anthropometric variables.
Results: Despite a significant reduction in mean quadriceps strength (29.9 kg v 41.2 kg; 95% CI –17.9 to –4.6; p = 0.001), cough gastric pressure was preserved in patients with COPD (227.3 cm H2O v 204.8 cm H2O; 95% CI –5.4 to 50.6; p = 0.11). Conclusions: Abdominal muscle strength is preserved in stable COPD outpatients in the presence of quadriceps weakness. This suggests that anatomical location and fibre type cannot explain quadriceps weakness in COPD. By inference, we conclude that disuse and consequent deconditioning are important factors in the development of quadriceps muscle weakness in COPD patients, or that activity protects the abdominal muscles from possible systemic myopathic processes. PMID:15923239 5. Structural features of sequential weak measurements NASA Astrophysics Data System (ADS) Diósi, Lajos 2016-07-01 We discuss the abstract structure of sequential weak measurement (WM) of general observables. In all orders, the sequential WM correlations without postselection yield the corresponding correlations of the Wigner function, offering direct quantum tomography through the moments of the canonical variables. Correlations in spin-1/2 sequential weak measurements coincide with those in strong measurements; they are constrained kinematically, and they are equivalent with single measurements. In sequential WMs with postselection, an anomaly occurs, different from the weak value anomaly of single WMs. In particular, the spread of polarization σ̂ as measured in double WMs of σ̂ will diverge for certain orthogonal pre- and postselected states. 6. Complex weak values in quantum measurement SciTech Connect Jozsa, Richard 2007-10-15 In the weak value formalism of Aharonov et al., the weak value A_w of any observable A is generally a complex number. We derive a physical interpretation of its value in terms of the shift in the measurement pointer's mean position and mean momentum.
In particular, we show that the mean position shift contains a term jointly proportional to the imaginary part of the weak value and the rate at which the pointer is spreading in space as it enters the measurement interaction. 7. Weak side of strong topological insulators NASA Astrophysics Data System (ADS) Sbierski, Björn; Schneider, Martin; Brouwer, Piet W. 2016-04-01 Strong topological insulators may have nonzero weak indices. The nonzero weak indices allow for the existence of topologically protected helical states along line defects of the lattice. If the lattice admits line defects that connect opposite surfaces of a slab of such a "weak-and-strong" topological insulator, these states effectively connect the surface states at opposite surfaces. Depending on the phases accumulated along the dislocation lines, this connection results in a suppression of in-plane transport and the opening of a spectral gap or in an enhanced density of states and an increased conductivity. 8. What controls the strength and brittleness of shale rocks? NASA Astrophysics Data System (ADS) Rybacki, Erik; Reinicke, Andreas; Meier, Tobias; Makasi, Masline; Dresen, Georg 2014-05-01 With respect to the productivity of gas shales, petroleum science often classifies the mechanical behavior of shales into rock types of high and low 'brittleness', sometimes also referred to as 'fraccability'. The term brittleness is not well defined, and different definitions exist, associated with elastic properties (Poisson's ratio, Young's modulus), with strength parameters (compressive and tensile strength), frictional properties (cohesion, friction coefficient), hardness (indentation), or with the strain or energy budget (ratio of reversible to total strain or energy, respectively). Shales containing a high amount of clay and organic matter are usually considered less brittle. Similarly, the strength of shales is usually assumed to be low if they contain a high fraction of weak phases.
We performed mechanical tests on a series of shales with different mineralogical compositions, varying porosity, and low to high maturity. Using cylindrical samples, we determined the uniaxial and triaxial compressive strength, static Young's modulus, the tensile strength, and Mode I fracture toughness. The results show that in general the uniaxial compressive strength (UCS) increases linearly with increasing Young's modulus (E), and both parameters increase with decreasing porosity. However, the strength and elastic modulus are not uniquely correlated with the mineral content. For shales with a relatively low quartz and high carbonate content, UCS and E increase with increasing quartz content, whereas for shales with a relatively low amount of carbonates but high quartz content, both parameters increase with decreasing fraction of the weak phases (clays, kerogen). In contrast, the average tensile strength of all shale types appears to increase with increasing quartz fraction. The internal friction coefficient of all investigated shales decreases with increasing pressure and may approach rather high values (up to ≈ 1). Therefore, the mechanical strength and 9. Competition between weak localization and ballistic transport. PubMed Tian, Chushun 2009-06-19 High-frequency transport in perfect periodic dielectric cylinder arrays is studied. We analytically calculate the diffusive-ballistic transport crossover, which displays the competition between weak localization and ballistic transport. PMID:19659008 10. Sodium in weak G-band giants NASA Technical Reports Server (NTRS) Drake, Jeremy J.; Lambert, David L. 1994-01-01 Sodium abundances have been determined for eight weak G-band giants whose atmospheres are greatly enriched with products of the CN-cycling H-burning reactions. Systematic errors are minimized by comparing the weak G-band giants to a sample of similar but normal giants.
If, further, Ca is selected as a reference element, model atmosphere-related errors should largely be removed. For the weak G-band stars (Na/Ca) = 0.16 +/- 0.01, which is just possibly greater than the result (Na/Ca) = 0.10 +/- 0.03 from the normal giants. This result demonstrates that the atmospheres of the weak G-band giants are not seriously contaminated with products of ON cycling. 11. Parametric instabilities in weakly magnetized plasma SciTech Connect Weatherall, J.C.; Goldman, M.V.; Nicholson, D.R. 1981-05-15 Parametric instabilities in a weakly magnetized plasma are discussed. The results are applied to waves excited by electron streams which travel outward from the Sun along solar-wind magnetic field lines, as in a type III solar radio burst. 12. Sensor/amplifier for weak light sources NASA Technical Reports Server (NTRS) Desmet, D. J.; Jason, A. J.; Parr, A. C. 1980-01-01 A light sensor/amplifier circuit detects weak light and converts it into a strong electrical signal in an electrically noisy environment. The circuit is relatively simple and uses inexpensive, readily available components. The device is useful in such applications as fire detection and photographic processing. 13. Experimental studies of weakly coupled superconductors (Review) NASA Astrophysics Data System (ADS) Dmitrenko, I. M. 2004-07-01 A review is given of the main experimental results obtained in research on weakly coupled superconductors after 1964 at the Institute for Low Temperature Physics and Engineering of the National Academy of Sciences of Ukraine, Kharkov (ILTPE). 14. Experimental and numerical investigations of beryllium strength models using the Rayleigh-Taylor instability NASA Astrophysics Data System (ADS) Henry de Frahan, M. T.; Belof, J. L.; Cavallo, R. M.; Raevsky, V. A.; Ignatova, O. N.; Lebedev, A.; Ancheta, D. S.; El-dasher, B. S.; Florando, J. N.; Gallegos, G. F.; Johnsen, E.; LeBlanc, M. M.
2015-06-01 We present a set of high-explosive-driven Rayleigh-Taylor strength experiments for beryllium to produce data to distinguish predictions by various strength models. Design simulations using existing strength model parameterizations from Steinberg-Lund and Preston-Tonks-Wallace (PTW) suggested an optimal design that would delineate between not just different strength models, but different parameter sets of the PTW model. Application of the models to the post-shot results, however, suggests growth consistent with little material strength. We focus mostly on efforts to simulate the data using published strength models as well as the more recent RING relaxation model developed at VNIIEF. The results of the strength experiments indicate a weak influence of strength in mitigating the growth, with the RING model coming closest to predicting the material behavior. Finally, we present shock and ramp-loading recovery experiments. 15. Experimental and numerical investigations of beryllium strength models using the Rayleigh-Taylor instability SciTech Connect Henry de Frahan, M. T. Johnsen, E.; Belof, J. L.; Cavallo, R. M.; Ancheta, D. S.; El-dasher, B. S.; Florando, J. N.; Gallegos, G. F.; LeBlanc, M. M.; Raevsky, V. A.; Ignatova, O. N.; Lebedev, A. 2015-06-14 We present a set of high-explosive-driven Rayleigh-Taylor strength experiments for beryllium to produce data to distinguish predictions by various strength models. Design simulations using existing strength model parameterizations from Steinberg-Lund and Preston-Tonks-Wallace (PTW) suggested an optimal design that would delineate between not just different strength models, but different parameter sets of the PTW model. Application of the models to the post-shot results, however, suggests growth consistent with little material strength. We focus mostly on efforts to simulate the data using published strength models as well as the more recent RING relaxation model developed at VNIIEF.
The results of the strength experiments indicate a weak influence of strength in mitigating the growth, with the RING model coming closest to predicting the material behavior. Finally, we present shock and ramp-loading recovery experiments. 16. Thigh Muscle Strength in Senior Athletes and Healthy Controls PubMed Central McCrory, Jean L; Salacinski, Amanda J; Hunt, Sarah E; Greenspan, Susan L 2016-01-01 Exercise is commonly recommended to counteract aging-related muscle weakness. While numerous exercise intervention studies on the elderly have been performed, few have included elite senior athletes, such as those who participate in the National Senior Games. The extent to which participation in highly competitive exercise affects muscle strength is unknown, as well as the extent to which such participation mitigates any aging-related strength losses. The purpose of this study was to examine isometric thigh muscle strength in selected athletes of the National Senior Games and healthy noncompetitive controls of similar age, as well as to investigate strength changes with aging in both groups. In all, 95 athletes of the Games and 72 healthy controls participated. Of the senior athletes, 43 were runners, 12 cyclists, and 40 swimmers. Three trials of isometric knee flexion and extension strength were collected using a load cell affixed to a custom-designed chair. Strength data were normalized to dual-energy x-ray absorptiometry-obtained lean mass of the leg. A 3-factor multivariate analysis of variance (group × gender × age group) was performed, which included both the extension and flexion variables (α = 0.05). Athletes exhibited 38% more extension strength and 66% more flexion strength than the controls (p < 0.001). Strength did not decrease with advancing age in either the athletes or the controls (p = 0.345). In conclusion, senior athletes who participate in highly competitive exercise have greater strength than healthy age-matched individuals who do not.
Neither group displayed the expected strength losses with aging. Our subject cohorts, however, were not typical of those over age 65 years because individuals with existing health conditions were excluded from the study. PMID:19972628 17. Some Topics in Weak and Electromagnetic Interactions NASA Astrophysics Data System (ADS) Bjorken, James D. 1982-01-01 The following sections are included: * INTRODUCTION * LECTURE I QUANTUM-ELECTRODYNAMICS TESTS; TESTS OF Jμ Jμ STRUCTURE IN WEAK INTERACTIONS; HIGHER-ORDER WEAK INTERACTIONS * LECTURE II PHENOMENOLOGY OF DEEP-INELASTIC PROCESSES; NO FINAL-STATE HADRONS OBSERVED * LECTURE III LIGHT-CONE COMMUTATORS; MODELS OF THE STRUCTURE FUNCTIONS * LECTURE IV HADRON FINAL STATES IN DEEP-INELASTIC PROCESSES; GENERAL CONSIDERATIONS * LECTURE V INCLUSIVE PROCESSES AT VERY HIGH TRANSVERSE MOMENTUM * REFERENCES 18. Deterministic implementation of weak quantum cubic nonlinearity SciTech Connect Marek, Petr; Filip, Radim; Furusawa, Akira 2011-11-15 We propose a deterministic implementation of weak cubic nonlinearity, which is a basic building block of a full-scale continuous-variable quantum computation. Our proposal relies on preparation of a specific ancillary state and transferring its nonlinear properties onto the desired target by means of deterministic Gaussian operations and feed forward. We show that, despite the imperfections arising from the deterministic nature of the operation, the weak quantum nonlinearity can be implemented and verified with the current level of technology. 19. Looking for heavier weak bosons with DUMAND NASA Technical Reports Server (NTRS) Brown, R. W.; Stecker, F. W. 
1980-01-01 One or more heavier weak bosons may coexist with the standard weak boson. A broad program may be laid out: a search for the heavier W's via the change in the total cross section due to the additional propagator, a concomitant search, and a subsequent search for significant antimatter in the universe involving the same annihilation but independent of possible neutrino oscillations. The program is likely to require detectors sensitive to higher energies, such as acoustic detectors. 20. Electromagnetic and Weak transitions in light nuclei SciTech Connect M. Viviani; L.E. Marcucci; A. Kievsky; S. Rosati; R. Schiavilla 2002-09-01 Recent advances in the study of the p-d radiative and μ-3He weak capture processes by our group are presented and discussed. The trinucleon bound and scattering states have been obtained from variational calculations by expanding the corresponding wave functions in terms of correlated hyper-spherical harmonic functions. The electromagnetic and weak transition currents include one- and two-body operators. The accuracy achieved in these calculations allows for interesting comparisons with experimental data. 1. Atypical presentation of GNE myopathy with asymmetric hand weakness. PubMed de Dios, John Karl L; Shrader, Joseph A; Joe, Galen O; McClean, Jeffrey C; Williams, Kayla; Evers, Robert; Malicdan, May Christine V; Ciccone, Carla; Mankodi, Ami; Huizing, Marjan; McKew, John C; Bluemke, David A; Gahl, William A; Carrillo-Carrasco, Nuria 2014-12-01 GNE myopathy is a rare autosomal recessive muscle disease caused by mutations in GNE, the gene encoding the rate-limiting enzyme in sialic acid biosynthesis. GNE myopathy usually manifests in early adulthood with distal myopathy that progresses slowly and symmetrically, first involving distal muscles of the lower extremities, followed by proximal muscles with relative sparing of the quadriceps. Upper extremities are typically affected later in the disease.
We report a patient with GNE myopathy who presented with asymmetric hand weakness. He had considerably decreased left grip strength, atrophy of the left anterior forearm and fibro-fatty tissue replacement of left forearm flexor muscles on T1-weighted magnetic resonance imaging. The patient was an endoscopist and thus the asymmetric hand involvement may be associated with left hand overuse in daily repetitive pinching and gripping movements, highlighting the possible impact of environmental factors on the progression of genetic muscle conditions. PMID:25182749 3. Beyond strong and weak: rethinking postdictatorship civil societies.
PubMed Riley, Dylan; Fernández, Juan J 2014-09-01 What is the impact of dictatorships on postdictatorial civil societies? Bottom-up theories suggest that totalitarian dictatorships destroy civil society while authoritarian ones allow for its development. Top-down theories of civil society suggest that totalitarianism can create civil societies while authoritarianism is unlikely to. This article argues that both these perspectives suffer from a one-dimensional understanding of civil society that conflates strength and autonomy. Accordingly we distinguish these two dimensions and argue that totalitarian dictatorships tend to create organizationally strong but heteronomous civil societies, while authoritarian ones tend to create relatively autonomous but organizationally weak civil societies. We then test this conceptualization by closely examining the historical connection between dictatorship and civil society development in Italy (a posttotalitarian case) and Spain (a postauthoritarian one). Our article concludes by reflecting on the implications of our argument for democratic theory, civil society theory, and theories of regime variation. PMID:25811069 4. Magnetophoresis of diamagnetic microparticles in a weak magnetic field. PubMed Zhu, Gui-Ping; Hejiazan, Majid; Huang, Xiaoyang; Nguyen, Nam-Trung 2014-12-21 Magnetic manipulation is a promising technique for lab-on-a-chip platforms. The magnetic approach can avoid problems associated with heat, surface charge, ionic concentration and pH level. The present paper investigates the migration of diamagnetic particles in a ferrofluid core stream that is sandwiched between two diamagnetic streams in a uniform magnetic field. The three-layer flow is expanded in a circular chamber for characterisation based on imaging of magnetic nanoparticles and fluorescent microparticles. A custom-made electromagnet generates a uniform magnetic field across the chamber. 
In a relatively weak uniform magnetic field, the diamagnetic particles in the ferrofluid move and spread across the chamber. Due to the magnetization gradient formed by the ferrofluid, diamagnetic particles undergo negative magnetophoresis and move towards the diamagnetic streams. The effects of magnetic field strength and the concentration of diamagnetic particles are studied in detail. PMID:25325774 6. Dynamic Strength of Materials NASA Astrophysics Data System (ADS) Chhabildas, Lalit 2011-06-01 Historically when shock loading techniques became accessible in the early fifties it was assumed that materials behave like fluids implying that materials cannot support any shear stresses. Early and careful investigation in the sixties by G. R.
Fowles in aluminum indicated otherwise. When he compared his Hugoniot compression measurements to hydrostatic pressure compression measurements in the pressure-volume plane, he noticed that the shock data lay above the hydrostatic compression curve, which laid the groundwork for the elastic-plastic theories that exist today. In this talk, a brief historical perspective on strength measurements in materials will be discussed, including how time-resolved techniques have played a role in allowing estimates of the strength of materials at Mbar stresses. This is especially crucial at high stresses, since we are determining values that are small compared to the loading stress. Even though we have made considerable progress in our understanding of materials, there are still many anomalies and unanswered questions. Some of these anomalies are fertile grounds for further and future research and will be mentioned. 7. Effect of Gender, Disease Duration and Treatment on Muscle Strength in Myasthenia Gravis PubMed Central Citirak, Gülsenay; Cejvanovic, Sanja; Andersen, Henning; Vissing, John 2016-01-01 Introduction The aim of this observational, cross-sectional study was to quantify the potential presence of muscle weakness among patients with generalized myasthenia gravis (gMG). The influence of gender, treatment intensity and disease duration on muscle strength and disease progression was also assessed. Methods Muscle strength was tested in 8 muscle groups by manual muscle testing and by hand-held dynamometry in 107 patients with gMG and 89 healthy age- and gender-matched controls. Disease duration, severity and treatment history were reviewed and compared with muscle strength. Results Patients had reduced strength in all tested muscle groups compared to control subjects (p<0.05). Women with gMG were stronger than men (decrease in strength 22.6% vs. 32.7% in men, P<0.05).
Regional differences in muscle weakness were also evident, with proximal muscles being more affected. Interestingly, muscle strength did not correlate with disease duration and treatment intensity. Conclusions The results of this study show that in patients with gMG: 1) there is significant muscle weakness, 2) muscle weakness is more pronounced in men than women, 3) shoulder abductors, hip flexors, and neck muscles are the most affected muscle groups, and 4) disease duration or treatment intensity alone are not predictors of loss of muscle strength in gMG. PMID:27741232 8. Oscillator strengths and collision strengths for S III NASA Technical Reports Server (NTRS) Ho, Y. K.; Henry, R. J. W. 1984-01-01 The present calculation, in a close-coupled approximation for the energy range up to 1,000,000 K, yields collision strengths for the electron impact excitation of S III from the ground 3p2 3P state to the excited states 3s3p3 3D0, 3P0, 3S0, 3d 3D0, 3P0, and 4s 3P0. Also obtained are those transitions' oscillator strengths, and strengths for others involving 3p2 1D and 1S. Configuration-interaction target wave functions yielding oscillator strengths that are accurate to 20 percent are used in collision strength calculations. 9. Strength Training and Children's Health. ERIC Educational Resources Information Center Faigenbaum, Avery D. 2001-01-01 Provides an overview of the potential health benefits of strength training for children, discussing the role of strength training in preventing sports-related injuries and highlighting design considerations for such programs. The focus is on musculoskeletal adaptations to strength training that are observable in healthy children. Guidelines for… 10.
Strength Development for Young Adolescents ERIC Educational Resources Information Center McDaniel, Larry W.; Jackson, Allen; Gaudet, Laura 2009-01-01 Participation in strength training is important for older children or young adolescents who wish to improve fitness or participate in sports. When designing strength training programs for youth, it must be kept in mind that this age group is immature anatomically, physiologically, and psychologically. For the younger or inexperienced group the strength training activities… 11. The fracture strength and frictional strength of Weber Sandstone USGS Publications Warehouse Byerlee, J.D. 1975-01-01 The fracture strength and frictional strength of Weber Sandstone have been measured as a function of confining pressure and pore pressure. Both the fracture strength and the frictional strength obey the law of effective stress, that is, the strength is determined not by the confining pressure alone but by the difference between the confining pressure and the pore pressure. The fracture strength of the rock varies by as much as 20 per cent depending on the cement between the grains, but the frictional strength is independent of lithology. Over the range 0-2 kb, τ = 0.5 + 0.6σn. This relationship also holds for other rocks such as gabbro, dunite, serpentinite, granite and limestone. © 1975. 12. Improving the payoffs of cooperators in three-player cooperative game using weak measurements NASA Astrophysics Data System (ADS) Liao, Xiang-Ping; Ding, Xiang-Zhuo; Fang, Mao-Fa 2015-12-01 In this paper, an efficient method is proposed to improve the payoffs of cooperators in a cooperative three-player quantum game under the action of amplitude damping, bit flip and depolarizing channels using weak measurements. It is shown that the payoffs of cooperators can be enhanced to a great extent in the case of the amplitude damping channel, and the payoff sudden death can be avoided in the case of bit flip and depolarizing channels.
Moreover, the payoffs of cooperators tend to a constant by changing the weak measurement strength in spite of sufficiently strong decoherence. 13. Using weak nonlinearity under decoherence for macroscopic entanglement generation and quantum computation SciTech Connect Jeong, Hyunseok 2005-09-15 Recently, there have been several suggestions that weak Kerr nonlinearity can be used for generation of macroscopic superpositions and entanglement and for linear optics quantum computation. However, it is not immediately clear that this approach can overcome decoherence effects. Our numerical study shows that nonlinearity of weak strength could be useful for macroscopic entanglement generation and quantum gate operations in the presence of decoherence. We suggest specific values for real experiments based on our analysis. Our discussion shows that the generation of macroscopic entanglement using this approach is within the reach of current technology. 14. Measurement of a weak transition moment using Coherent Control NASA Astrophysics Data System (ADS) Antypas, Dionysios We have developed a two-pathway Coherent Control technique for measurements of weak optical transition moments. We demonstrate this technique through a measurement of the transition moment of the highly-forbidden magnetic dipole transition between the 6s 2S1/2 and 7s 2S1/2 states in atomic Cesium. The experimental principle is based on a two-pathway excitation, using two phase-coherent laser fields, a fundamental field at 1079 nm and its second harmonic at 539.5 nm. The IR field induces a strong two-photon transition, while the 539.5 nm field drives a pair of weak one-photon transitions: a Stark-induced transition of controllable strength as well as the magnetic dipole transition. Observations of the interference between these transitions for different Stark-induced transition amplitudes allow a measurement of the ratio of the magnetic dipole to the Stark-induced moment.
The interference between the transitions is controlled by modulation of the phase-delay between the two optical fields. Our determination of the magnetic dipole moment is at the 0.4% level and in good agreement with previous measurements, and serves as a benchmark for our technique and apparatus. We anticipate that with further improvement of the apparatus detection sensitivity, the demonstrated scheme can be used for measurements of the very weak Parity Violation transition moment on the Cesium 6s2 S1/2→7s2 S1/2 transition. 15. Failure strength of icy lithospheres NASA Technical Reports Server (NTRS) Golombek, M. P.; Banerdt, W. B. 1987-01-01 Lithospheric strengths derived from friction on pre-existing fractures and ductile flow laws show that the tensile strength of intact ice under applicable conditions is actually an order of magnitude greater than widely assumed. It is demonstrated that this strength is everywhere greater than that required to initiate frictional sliding on pre-existing fractures and faults. Because the tensile strength of intact ice increases markedly with confining pressure, it actually exceeds the frictional strength at all depths. Thus, icy lithospheres will fail by frictional slip along pre-existing fractures at yield stresses greater than previously assumed rather than by opening tensile cracks in intact ice. 16. High Strength Development at Incompatible Semicrystalline Polymer-Polymer Interfaces NASA Astrophysics Data System (ADS) Hong, C. H.; Wool, Richard 2007-03-01 For incompatible A/B interfaces, the strength G1c is related to the equilibrium width w (normalized to the tube diameter) of the interface by G1c/G* = (w-1), where G* is the virgin strength [R.P. Wool, C. R. Chimie, 9 (2006) 25]. However, the interface strength is quite weak due to very limited interdiffusion.
The mechanism of high strength development of a series of thermoplastic polyurethane elastomers (TPU) bonding with ethylene vinyl alcohol copolymers (EVOH) was investigated. During cool-down of the A/B interface in the co-extruded melt, there exists a unique process window, the α-β window, which promotes considerable strength development. We used the differences in melting points and the volume contraction during asymmetric crystallization to generate influxes (σ nano-nails/unit area), where an influx occurs by the fluid being pulled into the crystallizing side. TPU samples with a higher degree of crystallization typically exhibited higher peel strengths, due to the formation of both inter- and intra-spherulitic influxes of nano-dimension across the interface. The peel energy now behaves as G1c ~ σL^2, where L is the length of the influx and L>>w. Annealing between the α and β relaxation temperatures of the EVOH generated additional influxes, which provided significant connectivity and peel strength. 17. Gaussian discriminating strength NASA Astrophysics Data System (ADS) Rigovacca, L.; Farace, A.; De Pasquale, A.; Giovannetti, V. 2015-10-01 We present a quantifier of nonclassical correlations for bipartite, multimode Gaussian states. It is derived from the Discriminating Strength measure, introduced for finite dimensional systems in Farace et al. [New J. Phys. 16, 073010 (2014), 10.1088/1367-2630/16/7/073010]. Like the latter, the new measure exploits the quantum Chernoff bound to gauge the susceptibility of the composite system with respect to local perturbations induced by unitary gates extracted from a suitable set of allowed transformations (the latter being identified by posing some general requirements). Closed expressions are provided for the case of two-mode Gaussian states obtained by squeezing or by linearly mixing via a beam splitter a factorized two-mode thermal state.
For these density matrices, we study how nonclassical correlations are related with the entanglement present in the system and with its total photon number. 18. High strength ferritic alloy DOEpatents Hagel, William C.; Smidt, Frederick A.; Korenko, Michael K. 1977-01-01 A high-strength ferritic alloy useful for fast reactor duct and cladding applications where an iron base contains from about 9% to about 13% by weight chromium, from about 4% to about 8% by weight molybdenum, from about 0.2% to about 0.8% by weight niobium, from about 0.1% to about 0.3% by weight vanadium, from about 0.2% to about 0.8% by weight silicon, from about 0.2% to about 0.8% by weight manganese, a maximum of about 0.05% by weight nitrogen, a maximum of about 0.02% by weight sulfur, a maximum of about 0.02% by weight phosphorous, and from about 0.04% to about 0.12% by weight carbon. 19. Hadronic Weak Interaction Studies at the SNS NASA Astrophysics Data System (ADS) 2016-03-01 Neutrons have been a useful probe in many fields of science, as well as an important physical system for study in themselves. Modern neutron sources provide extraordinary opportunities to study a wide variety of physics topics. Among them is a detailed study of the weak interaction. An overview of studies of the hadronic weak (quark-quark) as well as semi-leptonic (quark-lepton) interactions at the Spallation Neutron Source (SNS) is presented. These measurements, done in few-nucleon systems, are finally letting us gain knowledge of the hadronic weak interaction without the contributions from nuclear effects. Forthcoming results from the NPDGamma experiment will, due to the simplicity of the neutron, provide an unambiguous measurement of the long range pion-nucleon weak coupling (often referred to as hπ), which will finally test the theoretical predictions. 
Results from NPDGamma and future results from the n+3He experiment will need to be complemented by additional measurements to completely describe the hadronic weak interaction. 20. Explaining numeracy development in weak performing kindergartners. PubMed Toll, Sylke W M; Van Luit, Johannes E H 2014-08-01 Gaining better insight into precursors of early numeracy in young children is important, especially in those with inadequate numeracy skills. Therefore, in the current study, visual and verbal working memory, non-symbolic and symbolic comparison skills, and specific math-related language were used to explain early numeracy performance and development of weak performing children throughout kindergarten. The early numeracy ability of both weak performers and typical performers was measured at four time points during 2 years of kindergarten to compare growth rates. Results show a significantly faster development of early numeracy in the weak performers. The development of weak performers' numeracy was influenced by verbal working memory, symbolic comparison skills, and math language, whereas only math language was positively related to the slope of typical performers' numeracy. In the weak performers, visual working memory, non-symbolic comparison skills, and math language showed an effect on the initial early numeracy level of these children. The intercept of the typical performers was predicted by five covariates, all except non-symbolic comparison. PMID:24786672 1. DRAM Weak Cell Characterization for Retention Time. PubMed Kang, Jonghyuk; Lee, Sungho; Choi, Byoungdeog 2016-05-01 This work proposes a sequence of tests for detecting refresh weak cells based on data retention time distribution in the main cell array of DRAMs and verifies the feasibility of the proposed method through analysis of 30 nm design-rule DRAM cells with Recess Channel Array Transistor (RCAT) and Buried Channel Array Transistor (BCAT).
The basic idea of the proposed mechanism is to test with different bias conditions and break down retention failures based on their root causes, such as Gate Induced Drain Leakage (GIDL), sub-threshold leakage and junction leakage. This categorization helps to determine the physical locations of each failure group, enabling precise Physical Failure Analysis (PFA). The characterization of data retention weak cells for 30 nm design rule DRAMs with BCAT and RCAT has been investigated. Most weak cells were classified as GIDL leaky cells in both cases. In the case of BCAT, the distance between the word line and the storage node, caused by the process distribution, is the main origin of weak cells. In the case of RCAT, the sharp corner of the active layer in the storage node is the main cause of weak cells. PMID:27483878 2. Faithful conditional quantum state transfer between weakly coupled qubits NASA Astrophysics Data System (ADS) Miková, M.; Straka, I.; Mičuda, M.; Krčmarský, V.; Dušek, M.; Ježek, M.; Fiurášek, J.; Filip, R. 2016-08-01 One of the strengths of quantum information theory is that it can treat quantum states without referring to their particular physical representation. In principle, quantum states can therefore be fully swapped between various quantum systems by their mutual interaction and this quantum state transfer is crucial for many quantum communication and information processing tasks. In practice, however, the achievable interaction time and strength are often limited by decoherence. Here we propose and experimentally demonstrate a procedure for faithful quantum state transfer between two weakly interacting qubits. Our scheme enables a probabilistic yet perfect unidirectional transfer of an arbitrary unknown state of a source qubit onto a target qubit prepared initially in a known state.
The transfer is achieved by a combination of a suitable measurement of the source qubit and quantum filtering on the target qubit depending on the outcome of measurement on the source qubit. We experimentally verify feasibility and robustness of the transfer using a linear optical setup with qubits encoded into polarization states of single photons. PMID:27562544 6. Bats respond to very weak magnetic fields. PubMed Tian, Lan-Xiang; Pan, Yong-Xin; Metzner, Walter; Zhang, Jin-Shuo; Zhang, Bing-Fang 2015-01-01 How animals, including mammals, can respond to and utilize the direction and intensity of the Earth's magnetic field for orientation and navigation is contentious. In this study, we experimentally tested whether the Chinese Noctule, Nyctalus plancyi (Vespertilionidae) can sense magnetic field strengths that were even lower than those of the present-day geomagnetic field. Such field strengths occurred during geomagnetic excursions or polarity reversals and thus may have played an important role in the evolution of a magnetic sense. We found that in a present-day local geomagnetic field, the bats showed a clear preference for positioning themselves at the magnetic north. As the field intensity decreased to only 1/5th of the natural intensity (i.e., 10 μT; the lowest field strength tested here), the bats still responded by positioning themselves at the magnetic north. When the field polarity was artificially reversed, the bats still preferred the new magnetic north, even at the lowest field strength tested (10 μT), despite the fact that the artificial field orientation was opposite to the natural geomagnetic field (P<0.05). Hence, N. plancyi is able to detect the direction of a magnetic field even at 1/5th of the present-day field strength.
This high sensitivity to magnetic fields may explain how magnetic orientation could have evolved in bats even as the Earth's magnetic field strength varied and the polarity reversed tens of times over the past fifty million years. 7. Weak lensing in the Dark Energy Survey NASA Astrophysics Data System (ADS) Troxel, Michael 2016-03-01 I will present the current status of weak lensing results from the Dark Energy Survey (DES). DES will survey 5000 square degrees in five photometric bands (grizY), and has already provided a competitive weak lensing catalog from Science Verification data covering just 3% of the final survey footprint. I will summarize the status of shear catalog production using observations from the first year of the survey and discuss recent weak lensing science results from DES. Finally, I will report on the outlook for future cosmological analyses in DES including the two-point cosmic shear correlation function and discuss challenges that DES and future surveys will face in achieving a control of systematics that allows us to take full advantage of the available statistical power of our shear catalogs. 8. Composition of weakly altered Martian crust NASA Technical Reports Server (NTRS) Mustard, J. F.; Murchie, S. L.; Erard, S. 1993-01-01 The mineralogic and chemical composition of weakly altered crust remains an unresolved question for Mars. Dark regions hold clues to the composition since they are thought to comprise surface exposures of weakly altered crustal materials. Understanding the in situ composition of relatively pristine crustal rocks in greater detail is important for investigating basic volcanic processes. Also, this will provide additional constraints on the chemical pathways by which pristine rocks are altered to produce the observed ferric iron-bearing assemblages and inferred clay silicate, sulphate, and magnetic oxide phases. 
Reflectance spectra of dark regions obtained with the ISM instrument are being used to determine the basic mineralogy of weakly altered crust for a variety of regions on Mars. 9. Weak self-adjoint differential equations NASA Astrophysics Data System (ADS) Gandarias, M. L. 2011-07-01 The concepts of self-adjoint and quasi self-adjoint equations were introduced by Ibragimov (2006 J. Math. Anal. Appl. 318 742-57; 2007 Arch. ALGA 4 55-60). In Ibragimov (2007 J. Math. Anal. Appl. 333 311-28), a general theorem on conservation laws was proved. In this paper, we generalize the concept of self-adjoint and quasi self-adjoint equations by introducing the definition of weak self-adjoint equations. We find a class of weak self-adjoint quasi-linear parabolic equations. The property of a differential equation to be weak self-adjoint is important for constructing conservation laws associated with symmetries of the differential equation. 10. Systematics of strength function sum rules SciTech Connect Johnson, Calvin W. 2015-08-28 Sum rules provide useful insights into transition strength functions and are often expressed as expectation values of an operator. In this letter I demonstrate that non-energy-weighted transition sum rules have strong secular dependences on the energy of the initial state. Such non-trivial systematics have consequences: the simplification suggested by the generalized Brink–Axel hypothesis, for example, does not hold for most cases, though it weakly holds in at least some cases for electric dipole transitions. Furthermore, I show the systematics can be understood through spectral distribution theory, calculated via traces of operators and of products of operators. Seen through this lens, violation of the generalized Brink–Axel hypothesis is unsurprising: one expects sum rules to evolve with excitation energy.
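The non-energy-weighted sum rule discussed in this entry states that the total transition strength from an initial state, Σ_f |⟨f|O|i⟩|², equals the expectation value ⟨i|O†O|i⟩. A toy check with an arbitrary real 2×2 operator (illustrative numbers only, not from the paper):

```python
# Toy check of the non-energy-weighted sum rule: summing the transition
# strengths |<f|O|i>|^2 over a complete final-state basis reproduces the
# expectation value <i|O†O|i>.  All matrix entries are illustrative.

def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[r][k] * b[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def transpose(a):
    return [[a[c][r] for c in range(2)] for r in range(2)]

# A real transition operator (so O† is just the transpose).
O = [[0.3, 1.2],
     [0.7, -0.4]]

# Initial state |i> = first basis vector; final states = basis vectors.
i = 0
total_strength = sum(O[f][i] ** 2 for f in range(2))   # Σ_f |<f|O|i>|^2

OtO = matmul(transpose(O), O)                          # O†O
expectation = OtO[i][i]                                # <i|O†O|i>

print(abs(total_strength - expectation) < 1e-12)       # True
```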
Moreover, to lowest order the slope of the secular evolution can be traced to a component of the Hamiltonian being positive (repulsive) or negative (attractive). 11. Systematics of strength function sum rules DOE PAGES Johnson, Calvin W. 2015-08-28 Sum rules provide useful insights into transition strength functions and are often expressed as expectation values of an operator. In this letter I demonstrate that non-energy-weighted transition sum rules have strong secular dependences on the energy of the initial state. Such non-trivial systematics have consequences: the simplification suggested by the generalized Brink–Axel hypothesis, for example, does not hold for most cases, though it weakly holds in at least some cases for electric dipole transitions. Furthermore, I show the systematics can be understood through spectral distribution theory, calculated via traces of operators and of products of operators. Seen through this lens, violation of the generalized Brink–Axel hypothesis is unsurprising: one expects sum rules to evolve with excitation energy. Moreover, to lowest order the slope of the secular evolution can be traced to a component of the Hamiltonian being positive (repulsive) or negative (attractive). 12. Weak Acid Ionization Constants and the Determination of Weak Acid-Weak Base Reaction Equilibrium Constants in the General Chemistry Laboratory ERIC Educational Resources Information Center Nyasulu, Frazier; McMills, Lauren; Barlag, Rebecca 2013-01-01 A laboratory to determine the equilibrium constants of weak acid-weak base reactions is described. The equilibrium constants of component reactions when multiplied together equal the numerical value of the equilibrium constant of the summative reaction. The component reactions are weak acid ionization reactions, weak base hydrolysis… 13.
Extrapolating Weak Selection in Evolutionary Games PubMed Central Wu, Bin; García, Julián; Hauert, Christoph; Traulsen, Arne 2013-01-01 In evolutionary games, reproductive success is determined by payoffs. Weak selection means that even large differences in game outcomes translate into small fitness differences. Many results have been derived using weak selection approximations, in which perturbation analysis facilitates the derivation of analytical results. Here, we ask whether results derived under weak selection are also qualitatively valid for intermediate and strong selection. By “qualitatively valid” we mean that the ranking of strategies induced by an evolutionary process does not change when the intensity of selection increases. For two-strategy games, we show that the ranking obtained under weak selection cannot be carried over to higher selection intensity if the number of players exceeds two. For games with three (or more) strategies, previous examples for multiplayer games have shown that the ranking of strategies can change with the intensity of selection. In particular, rank changes imply that the most abundant strategy at one intensity of selection can become the least abundant for another. We show that this applies already to pairwise interactions for a broad class of evolutionary processes. Even when both weak and strong selection limits lead to consistent predictions, rank changes can occur for intermediate intensities of selection. To analyze how common such games are, we show numerically that for randomly drawn two-player games with three or more strategies, rank changes frequently occur and their likelihood increases rapidly with the number of strategies. In particular, rank changes are almost certain for larger numbers of strategies, which jeopardizes the predictive power of results derived for weak selection. PMID:24339769 14. Electro-weak reactions for astrophysics SciTech Connect R.
Schiavilla 2000-06-01 The status of "ab initio" microscopic calculations of the ²H(p,γ)³He and ³He(p,e⁺ν_e)⁴He reactions is reviewed. The methods used to generate accurate nuclear ground- and scattering-state wave functions, and to construct realistic electro-weak transition operators are described. The uncertainties in the theoretical predictions, particularly those relevant to the p-³He weak capture, are discussed. For the d-p radiative capture, the theoretical results are compared with the TUNL data in the energy range 0–100 keV. 15. Spectroscopy of a weakly isolated horizon NASA Astrophysics Data System (ADS) Chen, Ge-Rui; Huang, Yong-Chang 2016-06-01 The spectroscopy of a weakly isolated horizon has been investigated. We obtain an equally spaced entropy spectrum with its quantum equal to the one given by Bekenstein (Phys Rev D 7:2333, 1973). We demonstrate that the quantization of entropy and area is a generic property of horizons which exists in a wide class of spacetimes admitting weakly isolated horizons. Our method based on the tunneling method also indicates that the entropy quantum of black hole horizons is closely related to Hawking temperature. 16. Weak Lie symmetry and extended Lie algebra SciTech Connect Goenner, Hubert 2013-04-15 The concept of weak Lie motion (weak Lie symmetry) is introduced. Applications given exhibit a reduction of the usual symmetry, e.g., in the case of the rotation group. In this context, a particular generalization of Lie algebras is found ('extended Lie algebras') which turns out to be an involutive distribution or a simple example for a tangent Lie algebroid. Riemannian and Lorentz metrics can be introduced on such an algebroid through an extended Cartan-Killing form. Transformation groups from non-relativistic mechanics and quantum mechanics lead to such tangent Lie algebroids and to Lorentz geometries constructed on them (1-dimensional gravitational fields). 17.
Simple understanding of quantum weak values NASA Astrophysics Data System (ADS) Qin, Lupei; Feng, Wei; Li, Xin-Qi 2016-02-01 In this work we revisit the important and controversial concept of quantum weak values, aiming to provide a simplified understanding to its associated physics and the origin of anomaly. Taking the Stern-Gerlach setup as a working system, we base our analysis on an exact treatment in terms of quantum Bayesian approach. We also make particular connection with a very recent work, where the anomaly of the weak values was claimed from the pure statistics in association with “disturbance” and “post-selection”, rather than the unique quantum nature. Our analysis resolves the related controversies through a clear and quantitative way. 18. Simple understanding of quantum weak values PubMed Central Qin, Lupei; Feng, Wei; Li, Xin-Qi 2016-01-01 In this work we revisit the important and controversial concept of quantum weak values, aiming to provide a simplified understanding to its associated physics and the origin of anomaly. Taking the Stern-Gerlach setup as a working system, we base our analysis on an exact treatment in terms of quantum Bayesian approach. We also make particular connection with a very recent work, where the anomaly of the weak values was claimed from the pure statistics in association with “disturbance” and “post-selection”, rather than the unique quantum nature. Our analysis resolves the related controversies through a clear and quantitative way. PMID:26838670 19. Compressive wavefront sensing with weak values. PubMed Howland, Gregory A; Lum, Daniel J; Howell, John C 2014-08-11 We demonstrate a wavefront sensor that unites weak measurement and the compressive-sensing, single-pixel camera. Using a high-resolution spatial light modulator (SLM) as a variable waveplate, we weakly couple an optical field's transverse-position and polarization degrees of freedom. 
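The weak-value anomaly discussed in the entries above follows directly from the defining formula A_w = ⟨φ|A|ψ⟩/⟨φ|ψ⟩: when the pre-selected state |ψ⟩ and post-selected state |φ⟩ are nearly orthogonal, the denominator becomes small and A_w can leave the eigenvalue range. A minimal numeric sketch for σ_z of a spin-1/2, with arbitrarily chosen real states (illustrative, not taken from the papers):

```python
# Weak value A_w = <φ|σ_z|ψ> / <φ|ψ> for real two-component spin-1/2
# states.  With nearly orthogonal pre- and post-selection the weak value
# leaves the eigenvalue range [-1, 1] -- the "anomaly" discussed above.
# The angles below are illustrative choices.
import math

def weak_value(psi, phi):
    """Weak value of σ_z for real states psi, phi (σ_z|ψ> = (ψ0, -ψ1))."""
    num = phi[0] * psi[0] - phi[1] * psi[1]   # <φ|σ_z|ψ>
    den = phi[0] * psi[0] + phi[1] * psi[1]   # <φ|ψ>
    return num / den

a = math.radians(80)                          # pre-selected |ψ>
psi = (math.cos(a), math.sin(a))
b = math.radians(-8)                          # post-selection, nearly ⟂ |ψ>
phi = (math.cos(b), math.sin(b))

print(weak_value((1.0, 0.0), (1.0, 0.0)))     # eigenstate of σ_z: 1.0
wv = weak_value(psi, phi)
print(wv)                                     # ≈ 8.85, far outside [-1, 1]
```

The overlap ⟨φ|ψ⟩ here is cos 88° ≈ 0.035, which is what amplifies the weak value.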
By placing random, binary patterns on the SLM, polarization serves as a meter for directly measuring random projections of the wavefront's real and imaginary components. Compressive-sensing optimization techniques can then recover the wavefront. We acquire high quality, 256 × 256 pixel images of the wavefront from only 10,000 projections. Photon-counting detectors give sub-picowatt sensitivity. 20. Critical level statistics for weakly disordered graphene. PubMed Amanatidis, E; Kleftogiannis, I; Katsanos, D E; Evangelou, S N 2014-04-16 In two dimensions chaotic level statistics with the Wigner spacing distribution P(S) is expected for massless fermions in the Dirac region. The obtained P(S) for weakly disordered finite graphene samples with zigzag edges turns out, however, to be neither chaotic (Wigner) nor localized (Poisson). It is similar to the intermediate statistics at the critical point of the Anderson metal-insulator transition. The quantum transport of finite graphene for weak disorder, with critical level statistics, can occur via edge states as in topological insulators, and for strong disorder, graphene behaves as an ordinary Anderson insulator with Poisson statistics. 1. The molecular basis of skeletal muscle weakness in a mouse model of inflammatory myopathy PubMed Central Coley, William; Rayavarapu, Sree; Pandey, Gouri S.; Sabina, Richard L.; van der Meulen, Jack H.; Ampong, Beryl; Wortmann, Robert L.; Rawat, Rashmi; Nagaraju, Kanneboyina 2012-01-01 OBJECTIVE It is generally believed that muscle weakness in patients with polymyositis and dermatomyositis is due to autoimmune and inflammatory processes. However, it has been observed that there is a poor correlation between the suppression of inflammation and a recovery of muscle function in patients. We have therefore hypothesized that non-immune mechanisms also contribute to muscle weakness.
In particular, it has been suggested that an acquired deficiency of AMP deaminase (AMPD1) may be responsible for muscle weakness in myositis. METHODS We have used comprehensive functional, behavioral, histological, molecular, enzymatic and metabolic assessments before and after the onset of inflammation in the MHC class I mouse model of autoimmune inflammatory myositis. RESULTS We found that muscle weakness and metabolic disturbances were detectable in the mice prior to the appearance of infiltrating mononuclear cells. Force contraction analysis of muscle function revealed that weakness was correlated with AMPD1 expression and was myositis-specific. We also demonstrated that decreasing AMPD1 expression results in decreased muscle strength in healthy mice. Fiber typing suggested that fast-twitch muscles are converted to slow-twitch muscles as myositis progresses, and microarray results indicated that AMPD1 and other purine nucleotide pathway genes are suppressed, along with genes essential to glycolysis. CONCLUSION These data suggest that an AMPD1 deficiency is acquired prior to overt muscle inflammation and is responsible, at least in part, for the muscle weakness that occurs in the mouse model of myositis. AMPD1 is therefore a potential therapeutic target in myositis. PMID:22806328 2. Lithospheric strength variations in Mainland China: tectonic implications NASA Astrophysics Data System (ADS) Deng, Y.; Tesauro, M. 2015-12-01 We present new thermal and strength models of Mainland China. We integrate a thermal model for the crust, using a 3D steady-state heat conduction equation, with estimates for the upper mantle thermal structure obtained by inverting an S-wave tomography model. Using the new thermal model and attributing to the lithospheric layers a 'soft' and 'hard' rheology, respectively, we estimate the integrated strength of the lithosphere.
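The "integrated strength of the lithosphere" referred to here is conventionally obtained by integrating, over depth, the minimum of a brittle (frictional) yield stress and a ductile (power-law creep) yield stress. A sketch of that calculation, in which all parameters (linear geotherm, friction coefficient, creep constants) are illustrative placeholders and not the rheologies or thermal model used in the study:

```python
# Sketch of an integrated lithospheric strength calculation: at each
# depth the yield strength is the minimum of a brittle (frictional)
# limit and a ductile (power-law creep) limit, and the column is then
# integrated over depth.  All parameter values are illustrative.
import math

RHO, G = 2800.0, 9.81          # crustal density [kg/m^3], gravity [m/s^2]
MU = 0.6                       # Byerlee-type friction coefficient
EDOT = 1e-15                   # reference strain rate [1/s]
A, N, Q = 5e-18, 3.0, 1.9e5    # power-law creep constants (illustrative)
R = 8.314                      # gas constant [J/(mol K)]

def temperature(z):
    """Linear geotherm: 283 K at the surface, 25 K/km gradient."""
    return 283.0 + 0.025 * z

def yield_strength(z):
    brittle = MU * RHO * G * z
    ductile = (EDOT / A) ** (1.0 / N) * math.exp(Q / (N * R * temperature(z)))
    return min(brittle, ductile)   # weakest mechanism controls

# Trapezoidal integration of the strength envelope over a 40 km column.
depths = [i * 100.0 for i in range(401)]            # 0..40 km, 100 m steps
strengths = [yield_strength(z) for z in depths]
integrated = sum(0.5 * (strengths[i] + strengths[i + 1]) * 100.0
                 for i in range(len(depths) - 1))
print(f"integrated strength: {integrated:.3e} N/m")
```

With these placeholder values the column is brittle-limited near the surface and creep-limited at depth, which is the generic envelope shape behind strength maps like the one described in the abstract.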
In the Ordos and the Sichuan basins, characterized by intermediate temperatures, strength is primarily concentrated in the crust, when the rheology is 'soft', and in both the crust and upper mantle, when the rheology is 'hard'. In turn, the Tibetan Plateau and the Tarim basin have a weak/strong lithosphere mainly on account of their high/low temperatures. Deep earthquakes releasing high seismic energy, occurring beneath the Tien Shan orogen, may be related to the brittle failure of anhydrous granulite-facies rocks composing its lower crust. In contrast, the fluids released by the Indian slab favor the triggering of earthquakes located in the deep crust of south Tibet. Comparison of temperatures, strength and effective viscosity variations with the earthquake distribution and the seismic energy released indicates that both the deep part of the crust and the upper mantle of the Tibetan Plateau are weak and prone to flow towards the adjacent areas. On account of the high strength of some of the tectonic domains surrounding Tibet, the flow is directed northward beneath the Qaidam basin and turns south of the Sichuan basin, moving toward the weak South China block. 3. Strength of Chemical Bonds NASA Technical Reports Server (NTRS) Christian, Jerry D. 1973-01-01 Students are not generally made aware of the extraordinary magnitude of the strengths of chemical bonds in terms of the forces required to pull them apart. Molecular bonds are usually considered in terms of the energies required to break them, and we are not astonished at the values encountered. For example, the Cl2 bond energy, 57.00 kcal/mole, amounts to only 9.46 × 10⁻²⁰ cal/molecule, a very small amount of energy, indeed, and impossible to measure directly. However, the forces involved in realizing the energy when breaking the bond operate over a very small distance, only 2.94 Å, and, thus, f_ave ≈ D_e/(r - r_e) must be very large.
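The average force quoted in this entry can be checked with a short unit-conversion calculation; Avogadro's number and the calorie-to-joule factor are the only inputs added to the figures given in the abstract:

```python
# Back-of-envelope check of the average force needed to break a Cl2
# bond, using the figures quoted above: De = 57.00 kcal/mol acting over
# the 2.94 Å between r_e and the (99.5%-dissociated) separation.
AVOGADRO = 6.022e23            # molecules per mole
CAL_TO_J = 4.184               # joules per calorie

de_per_molecule_cal = 57.00e3 / AVOGADRO        # ≈ 9.46e-20 cal/molecule
de_per_molecule_j = de_per_molecule_cal * CAL_TO_J
separation_m = 2.94e-10                          # 2.94 Å in metres

f_ave = de_per_molecule_j / separation_m
print(f"average force ≈ {f_ave:.2e} N")          # ≈ 1.35e-09 N
```

A nanonewton-scale force on a single molecule is indeed enormous, which is the point the abstract is making.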
The forces involved in dissociating the molecule are discussed in the following. In consideration of average forces, the molecule shall be assumed arbitrarily to be dissociated when the atoms are far enough separated so that the potential, relative to that of the infinitely separated atoms, is reduced by 99.5% from the potential of the molecule at the equilibrium bond length (r_e); for Cl2, with r_e = 1.988 Å, this occurs at 4.928 Å. 4. Cesium oscillator strengths measured with a multiple-path-length absorption cell NASA Technical Reports Server (NTRS) Exton, R. J. 1976-01-01 Absorption-oscillator-strength measurements for the principal series in cesium were measured using a multiple-path-length cell. The optical arrangement included a movable transverse path for checking the uniformity of the alkali density along the length of the cell and which also allowed strength measurements to be made simultaneously on both strong and weak lines. The strengths measured on the first 10 doublets indicate an increasing trend in the doublet ratio. The individual line strengths are in close agreement with the high resolution measurements of Pichler (1974) and with the calculations of Norcross (1973). 5. Muscle weakness is related to slip-initiated falls among community-dwelling older adults. PubMed Ding, Li; Yang, Feng 2016-01-25 The purposes of this study were (1) to investigate the relationship between muscle weakness and slip-related falls among community-dwelling older adults, and (2) to determine optimal cut-off values with respect to the knee strength capacity which can be used to identify individuals at high risk of falls. Thirty-six healthy older adults participated in this study. Their muscle strength (torque) was assessed at the right knee under maximum voluntary isometric (flexion and extension) contractions. They were then moved to a special treadmill.
After walking regularly five times on the treadmill, they experienced an identical and unannounced slip during walking on the treadmill with the protection of a safety harness. This treadmill could be considered a standardized platform, inducing an unexpected slip. Accuracy of predicting slip outcome (fall vs. recovery) was examined for both strength measurements (i.e., the strength capacity of knee extensor and flexor) using univariate logistic regressions. The optimal cutoff values for the two strength measurements were determined by the receiver operating characteristic analysis. Results showed that fallers displayed significantly lower knee strength capacities compared to their recovery counterpart (1.10 vs. 1.44 Nm/kg, p<0.01, effect size Cohen's d=0.95 for extensor; 0.93 vs. 1.13 Nm/kg, p<0.05, d=0.69 for flexor). Such results suggested that muscle weakness contributes to falls initiated by a slip during gait. Our findings could provide guidance to identify individuals at increased risk of falling using the derived optimal cutoff values of knee strength capacity among older adults. 6. Reaction zone structure for strong, weak overdriven, and weak underdriven oblique detonations NASA Technical Reports Server (NTRS) Powers, Joseph M.; Gonthier, Keith A. 1992-01-01 A simple dynamic systems analysis is used to give examples of strong, weak overdriven, and weak underdriven oblique detonations. Steady oblique detonations consisting of a straight lead shock attached to a solid wedge followed by a resolved reaction zone structure are admitted as solutions to the reactive Euler equations. This is demonstrated for a fluid that is taken to be an inviscid, calorically perfect ideal gas that undergoes a two-step irreversible reaction with the first step exothermic and the second step endothermic. This model admits solutions for a continuum of shock wave angles for two classes of solutions identified by a Rankine-Hugoniot analysis: strong and weak overdriven waves.
The other class, weak underdriven, is admitted for eigenvalue shock-wave angles. Chapman-Jouguet waves, however, are not admitted. These results contrast with those for a corresponding one-step model that, for detonations with a straight lead shock, only admits strong, weak overdriven, and Chapman-Jouguet solutions. 7. [A strong man with a weak shoulder]. PubMed Henket, Marjolijn; Lycklama á Nijeholt, Geert J; van der Zwaal, Peer 2013-01-01 A 47-year-old former Olympic athlete had pain and weakness of his left shoulder. There was no prior trauma. He had full range-of-motion and a scapular dyskinesia. There was atrophy of the trapezius and sternocleidomastoideus muscles. He was diagnosed with 'idiopathic neuritis of the accessorius nerve'. PMID:24326139 8. Interaction barriers for light, weakly bound projectiles SciTech Connect Kolata, J. J.; Aguilera, E. F. 2009-02-15 A parametrization of the interaction-barrier model of C. Y. Wong [Phys. Rev. Lett. 31, 766 (1973)] is given for light, weakly bound projectiles and also for the exotic 'halo' nuclei ⁶He and ⁸B. Comparisons are made with the original parametrization. The extremely anomalous behavior of the interaction radius and barrier curvature for halo nuclei is discussed. 9. Resource Letter WI-1: Weak Interactions ERIC Educational Resources Information Center Holstein, Barry R. 1977-01-01 Provides a listing of sources of literature and teaching aids to improve course content in the fields of: weak interactions, beta decay, orbital electron capture, muon capture, semileptonic decay, nonleptonic processes, parity violation in nuclei, neutrino physics, and parity violation in atomic physics. (SL) 10. Weak radiative baryonic decays of B mesons SciTech Connect Kohara, Yoji 2004-11-01 Weak radiative baryonic B decays B → B₁B̄₂γ are studied under the assumption of the short-distance b → sγ electromagnetic penguin transition dominance.
The relations among the decay rates of various decay modes are derived. 11. Quantum Signature Scheme with Weak Arbitrator NASA Astrophysics Data System (ADS) Luo, Ming-Xing; Chen, Xiu-Bo; Yun, Deng; Yang, Yi-Xian 2012-07-01 In this paper, we propose one quantum signature scheme with a weak arbitrator to sign classical messages. This scheme can preserve the merits of the original arbitrated scheme with some entanglement resources, provide higher transmission efficiency, and reduce the complexity of implementation. The arbitrator is costless and only involved in the disagreement case. 12. 7 CFR 51.894 - Weak. Code of Federal Regulations, 2014 CFR, 2014-01-01. Title 7, Agriculture; Section 51.894; Agriculture Regulations of the Department of Agriculture, AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE, REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946 AND THE EGG PRODUCTS INSPECTION ACT... 13. Weak shock wave reflection from concave surfaces NASA Astrophysics Data System (ADS) Gruber, Sebastien; Skews, Beric 2013-07-01 The reflection of very weak shock waves from concave curved surfaces has not been well documented in the past, and recent studies have shown the possible existence of a variation in the accepted reflection configuration evolution as a shock wave encounters an increasing gradient on the reflecting surface. The current study set out to investigate this anomaly using high-resolution photography. Shock tube tests were done on various concave circular and parabolic geometries, all with zero initial ramp angle. Although the results have limitations due to the achievable image resolution, the results indicate that for very weak Mach numbers, M_S < 1.1, there may be a region in which the reflection configuration resembles that of a regular reflection, unlike for the stronger shock wave case.
This region exists after the triple point of the Mach reflection meets the reflecting surface and prior to the formation of the additional shock structures that represent a transitioned regular reflection. The Mach and transitioned regular reflections at 1.03 < M_S < 1.05 also exhibit no signs of a visible shear layer, or a clear discontinuity at the triple point, and are thus also apparently different in the weak shock regime than what has been described for stronger shocks, similar to what has been shown for weak shocks reflecting off a plane wedge. 14. Quantum trajectories based on the weak value NASA Astrophysics Data System (ADS) Mori, Takuya; Tsutsui, Izumi 2015-04-01 The notion of the trajectory of an individual particle is strictly inhibited in quantum mechanics because of the uncertainty principle. Nonetheless, the weak value, which has been proposed as a novel and measurable quantity definable to any quantum observable, can offer a possible description of trajectory on account of its statistical nature. In this paper, we explore the physical significance provided by this "weak trajectory" by considering various situations where interference takes place simultaneously with the observation of particles, that is, in prototypical quantum situations for which no classical treatment is available. These include the double slit experiment and Lloyd's mirror, where in the former case it is argued that the real part of the weak trajectory describes an average over the possible classical trajectories involved in the process, and that the imaginary part is related to the variation of interference. It is shown that this average interpretation of the weak trajectory holds universally under the complex probability defined from the given transition process. These features remain essentially unaltered in the case of Lloyd's mirror where interference occurs with a single slit. 15.
Transitivity, weakly mixing property and chaos NASA Astrophysics Data System (ADS) Wang, Lidong; Liang, Jianhua; Wang, Yiyi; Sun, Xuelian 2016-01-01 Let X be a compact metric space without isolated points and let f : X → X be a continuous map. In this paper, if (X,f) is a transitive dynamical system with a repelling periodic point, then f is chaotic in the sense of Kato. In addition, if f is weakly topologically mixing, then f is chaotic in the strong sense of Kato. 16. Molecular Handshake: Recognition through Weak Noncovalent Interactions ERIC Educational Resources Information Center Murthy, Parvathi S. 2006-01-01 The weak noncovalent interactions between substances, the handshake in the form of electrostatic interactions, van der Waals' interactions or hydrogen bonding is universal to all living and nonliving matter. They significantly influence the molecular and bulk properties and behavior of matter. Their transient nature affects chemical reactions and… 17. Modeling, Measuring, and Compensating Color Weak Vision NASA Astrophysics Data System (ADS) Oshima, Satoshi; Mochizuki, Rika; Lenz, Reiner; Chao, Jinhui 2016-06-01 We use methods from Riemann geometry to investigate transformations between the color spaces of color-normal and color weak observers. The two main applications are the simulation of the perception of a color weak observer for a color normal observer and the compensation of color images in a way that a color weak observer has approximately the same perception as a color normal observer. The metrics in the color spaces of interest are characterized with the help of ellipsoids defined by the just-noticeable differences between colors, which are measured with the help of color-matching experiments. The constructed mappings are isometries of Riemann spaces that preserve the perceived color-differences for both observers.
Among the two approaches to build such an isometry, we introduce normal coordinates in Riemann spaces as a tool to construct a global color-weak compensation map. Compared to previously used methods this method is free from approximation errors due to local linearizations and it avoids the problem of shifting locations of the origin of the local coordinate system. We analyse the variations of the Riemann metrics for different observers obtained from new color matching experiments and describe three variations of the basic method. The performance of the methods is evaluated with the help of semantic differential (SD) tests. 18. Chaotic weak chimeras and their persistence in coupled populations of phase oscillators NASA Astrophysics Data System (ADS) Bick, Christian; Ashwin, Peter 2016-05-01 Nontrivial collective behavior may emerge from the interactive dynamics of many oscillatory units. Chimera states are chaotic patterns of spatially localized coherent and incoherent oscillations. The recently-introduced notion of a weak chimera gives a rigorously testable characterization of chimera states for finite-dimensional phase oscillator networks. In this paper we give some persistence results for dynamically invariant sets under perturbations and apply them to coupled populations of phase oscillators with generalized coupling. In contrast to the weak chimeras with nonpositive maximal Lyapunov exponents constructed so far, we show that weak chimeras that are chaotic can exist in the limit of vanishing coupling between coupled populations of phase oscillators. We present numerical evidence that positive Lyapunov exponents can persist for a positive measure set of this inter-population coupling strength. 19. Tensile strength of restorative resins. PubMed Zidan, O; Asmussen, E; Jørgensen, K D 1980-06-01 The purpose of the present work was to measure the tensile strength of restorative resins and to study the effect of the method of measurement on the recorded results. 
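One of the two methods compared in this entry, the diametral compression (Brazilian) test, infers tensile strength from the standard relation σ_t = 2P/(πDt), where P is the compressive load applied across the diameter of a disc, D its diameter, and t its thickness. A sketch with assumed specimen dimensions and failure load (sample numbers, not data from the study):

```python
# Diametral compression test: tensile strength from a compressive load
# applied across the diameter of a disc, σ_t = 2P / (π D t).
# Specimen dimensions and load below are assumed sample values.
import math

def diametral_tensile_strength(load_n, diameter_m, thickness_m):
    """Tensile strength [Pa] from the diametral (Brazilian) test formula."""
    return 2.0 * load_n / (math.pi * diameter_m * thickness_m)

# e.g. a 6 mm x 3 mm composite disc failing at 1.2 kN:
sigma_t = diametral_tensile_strength(1200.0, 6e-3, 3e-3)
print(f"σ_t ≈ {sigma_t / 1e6:.1f} MPa")        # ≈ 42.4 MPa
```

The formula assumes the disc fails in tension along the loaded diameter; as the abstract notes, that assumption does not hold equally well for all materials, which is why the two methods disagree.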
A direct pull method using dumb-bell shaped specimens was used. The tensile strength of the resins was also tested using the diametral compression method suggested by the A.D.A. It was found that the method of testing affects the results. Although the diametral compression method is a simple method, it cannot be considered reliable for all types of material. The tensile strength of the conventional composites was significantly higher than the tensile strength of the microfilled composites. 20. Resolution of pronounced painless weakness arising from radiculopathy and disk extrusion. PubMed Lipetz, Jason S; Misra, Neelam; Silber, Jeff S 2005-07-01 In this retrospective, consecutive case series, we report the nonsurgical and rehabilitation outcomes of consecutive patients who presented with pronounced painless weakness arising from disk extrusion. Seven consecutive patients who chose physiatric care were followed clinically, and strength return was monitored. Each presented with predominantly painless radiculopathy, functionally significant strength loss, and radiographic evidence of disk extrusion or sequestration. Each patient participated in a targeted strengthening program, and in some cases, transforaminal injection therapy was employed. Each patient demonstrated an eventual full functional recovery. In most cases, electrodiagnostic studies were performed and included a needle examination of the affected limb and compound muscle action potentials from the most clinically relevant and weakened limb muscle. The electrodiagnostic findings and, in particular, the quantitative compound muscle action potential data seemed to correlate with the timing of motor recovery. Patients with predominantly painless and significant weakness arising from disk extrusion can demonstrate successful rehabilitation outcomes. Despite a relative absence of pain, such patients can present with a more rapidly reversible neurapraxic type of weakness. 
The more quantitative compound muscle action potential data obtained through electrodiagnostic studies may offer the treating physician an additional means of characterizing the type of neuronal injury at play and the likelihood and timing of strength return. PMID:15973090 1. Physiological Effects of Strength Training and Various Strength Training Devices. ERIC Educational Resources Information Center Wilmore, Jack H. Current knowledge in the area of muscle physiology is a basis for a discussion on strength training programs. It is now recognized that the expression of strength is related to, but not dependent upon, the size of the muscle and is probably more related to the ability to recruit more muscle fibers in the contraction, or to better synchronize their… 2. Autaptic pacemaker mediated propagation of weak rhythmic activity across small-world neuronal networks NASA Astrophysics Data System (ADS) Yilmaz, Ergin; Baysal, Veli; Ozer, Mahmut; Perc, Matjaž 2016-02-01 We study the effects of an autapse, which is mathematically described as a self-feedback loop, on the propagation of weak, localized pacemaker activity across a Newman-Watts small-world network consisting of stochastic Hodgkin-Huxley neurons. We consider that only the pacemaker neuron, which is stimulated by a subthreshold periodic signal, has an electrical autapse that is characterized by a coupling strength and a delay time. We focus on the impact of the coupling strength, the network structure, the properties of the weak periodic stimulus, and the properties of the autapse on the transmission of localized pacemaker activity. Obtained results indicate the existence of optimal channel noise intensity for the propagation of the localized rhythm. Under optimal conditions, the autapse can significantly improve the propagation of pacemaker activity, but only for a specific range of the autaptic coupling strength. 
Moreover, the autaptic delay time has to be equal to the intrinsic oscillation period of the Hodgkin-Huxley neuron or its integer multiples. We analyze the inter-spike interval histogram and show that the autapse enhances or suppresses the propagation of the localized rhythm by increasing or decreasing the phase locking between the spiking of the pacemaker neuron and the weak periodic signal. In particular, when the autaptic delay time is equal to the intrinsic period of oscillations an optimal phase locking takes place, resulting in a dominant time scale of the spiking activity. We also investigate the effects of the network structure and the coupling strength on the propagation of pacemaker activity. We find that there exist an optimal coupling strength and an optimal network structure that together warrant an optimal propagation of the localized rhythm. 3. Extraversion, neuroticism and strength of the nervous system. PubMed Frigon, J Y 1976-11-01 The hypothesized identity of the dimensions of extraversion-introversion and strength of the nervous system was tested on four groups of nine subjects (neurotic extraverts, stable extraverts, neurotic introverts, stable introverts). Strength of the subjects' nervous system was estimated using the electroencephalographic (EEG) variant of extinction with reinforcement. Introverted subjects were found to have weak nervous systems, according to the EEG index, while extraverted subjects had strong nervous systems, thus confirming the hypothesis. It was also found that the dimension of strength of the nervous system was unrelated to differences in neuroticism. The results are interpreted as adding support to Eysenck's theory relating differences in extraversion-introversion to differences in cortical arousal. 4. 
Bats Respond to Very Weak Magnetic Fields PubMed Central Tian, Lan-Xiang; Pan, Yong-Xin; Metzner, Walter; Zhang, Jin-Shuo; Zhang, Bing-Fang 2015-01-01 How animals, including mammals, can respond to and utilize the direction and intensity of the Earth’s magnetic field for orientation and navigation is contentious. In this study, we experimentally tested whether the Chinese Noctule, Nyctalus plancyi (Vespertilionidae) can sense magnetic field strengths that were even lower than those of the present-day geomagnetic field. Such field strengths occurred during geomagnetic excursions or polarity reversals and thus may have played an important role in the evolution of a magnetic sense. We found that in a present-day local geomagnetic field, the bats showed a clear preference for positioning themselves at the magnetic north. As the field intensity decreased to only 1/5th of the natural intensity (i.e., 10 μT; the lowest field strength tested here), the bats still responded by positioning themselves at the magnetic north. When the field polarity was artificially reversed, the bats still preferred the new magnetic north, even at the lowest field strength tested (10 μT), despite the fact that the artificial field orientation was opposite to the natural geomagnetic field (P<0.05). Hence, N. plancyi is able to detect the direction of a magnetic field even at 1/5th of the present-day field strength. This high sensitivity to magnetic fields may explain how magnetic orientation could have evolved in bats even as the Earth’s magnetic field strength varied and the polarity reversed tens of times over the past fifty million years. PMID:25922944 5. Causal conjunction fallacies: the roles of causal strength and mental resources. 
PubMed Crisp, Aimée Kay; Feeney, Aidan 2009-12-01 In two experiments we tested the prediction derived from Tversky and Kahneman's (1983) work on the causal conjunction fallacy that the strength of the causal connection between constituent events directly affects the magnitude of the causal conjunction fallacy. We also explored whether any effects of perceived causal strength were due to graded output from heuristic Type 1 reasoning processes or the result of analytic Type 2 reasoning processes. As predicted, Experiment 1 demonstrated that fallacy rates were higher for strongly than for weakly related conjunctions. Weakly related conjunctions in turn attracted higher rates of fallacious responding than did unrelated conjunctions. Experiment 2 showed that a concurrent memory load increased rates of fallacious responding for strongly related but not for weakly related conjunctions. We interpret these results as showing that manipulations of the strength of the perceived causal relationship between the conjuncts result in graded output from heuristic reasoning processes and that additional mental resources are required to suppress strong heuristic output. 6. Frictional Strength of Hayward Fault Gouge NASA Astrophysics Data System (ADS) Morrow, C.; Moore, D.; Lockner, D. 2007-12-01 A recent 3-D geologic model of the Hayward fault in the San Francisco Bay Region shows that a number of different rock units are juxtaposed across the fault surface as a result of lateral displacement. The fault gouge formed therein is likely a mixture of these various rock types. To better model the mechanical behavior of the Hayward fault, which is known to both creep and have large earthquakes, frictional properties of mixtures of the principal rock types were determined in the laboratory.
Room temperature triaxial shearing tests were conducted on binary and ternary mixtures of Great Valley Sequence graywacke, Franciscan jadeite-bearing metagraywacke, Franciscan pumpellyite-bearing metasandstone, Franciscan melange matrix, serpentinite and two-pyroxene gabbro. The gouge samples were crushed and sieved (<150 μm grains), then applied in a 1-mm layer between saw-cut sliding blocks. Each sample assemblage was saturated and sheared at constant pore water pressure of 1 MPa and normal stress of 51 MPa. Coefficients of friction, μ, ranged from a low of 0.38 for the serpentinite to a maximum of 0.85 for the gabbro. While the serpentinite and the Franciscan melange matrix were relatively weak, all other rock types obeyed Byerlee's Law. The friction coefficient of mixtures could be reliably predicted by a simple average based on dry weight percent of the end member strengths. This behavior is in contrast to some mixtures of common gouge materials such as montmorillonite+quartz, which exhibit non- linear frictional strength trends with varying weight percent of constituents. All materials tested except serpentinite were velocity strengthening, therefore promoting creeping behavior. The addition of serpentinite decreased a-b values of the gouge and increased the characteristic displacement, dc, of strength evolution. Because temperature strongly influences the mechanical properties of fault gouge as well as speeding chemical reactions between the constituents, elevated 7. The weak measurement process and the weak value of spin for metastable helium 23S1 NASA Astrophysics Data System (ADS) Monachello, Vincenzo; Barker, Peter; Flack, Robert; Hiley, Basil 2016-05-01 An experiment is being designed and constructed in order to measure the weak value of spin for an atomic system. 
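The mixing rule reported in the Hayward fault gouge abstract above (entry 6) — the friction coefficient of a mixture predicted by a simple average based on dry weight percent of the end-member strengths — can be sketched as follows. The function name is illustrative, not from the paper; the end-member values μ = 0.38 (serpentinite) and μ = 0.85 (gabbro) are the extremes quoted in the abstract.

```python
# Sketch of the simple mixing rule described in the gouge-friction abstract:
# the friction coefficient of a gouge mixture is estimated as the average of
# the end-member coefficients, weighted by dry weight fraction.
def mixture_friction(weight_fractions, end_member_mu):
    """Weighted average of end-member friction coefficients.

    weight_fractions -- dry weight fraction of each constituent (must sum to 1)
    end_member_mu    -- measured friction coefficient of each pure end member
    """
    if abs(sum(weight_fractions) - 1.0) > 1e-9:
        raise ValueError("weight fractions must sum to 1")
    return sum(w * mu for w, mu in zip(weight_fractions, end_member_mu))

# Illustrative only: a 50/50 serpentinite/gabbro mix under this rule.
print(mixture_friction([0.5, 0.5], [0.38, 0.85]))  # -> 0.615
```

Note that, as the abstract points out, such linear averaging does not hold for all gouge materials (e.g. montmorillonite+quartz mixtures behave nonlinearly), so the rule is specific to the rock types tested there.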
The principle of the 'weak measurement' process was first proposed by Aharonov, Albert and Vaidman, and describes a scenario in which a system is weakly coupled to a pointer between well-defined pre- and post-selected states. This experiment will utilise a pulsed supersonic beam of spin-1 metastable Helium (He*) atoms in the 23S1 state. The spin of the pre-selected He* atoms will be weakly coupled to its centre-of-mass. During its flight, the atomic beam will be prepared in a desired quantum state and travel through two inhomogeneous magnets (weak and strong) which both comprise the 'weak measurement' process. The deviation of the post-selected ms = +1 state as measured using a micro-channel plate, phosphor screen and CCD camera setup will allow for the determination of the weak value of spin. This poster will report on the methods used and the experimental realisation. 8. Weak Gravitational Lensing by Illustris-1 Galaxies NASA Astrophysics Data System (ADS) Brainerd, Tereasa G.; Koh, Patrick H. 2016-06-01 We compute the weak gravitational lensing signal of isolated, central galaxies obtained from the z=0.5 timestep of the ΛCDM Illustris-1 simulation. The galaxies have stellar masses ranging from 9.5 ≤ log10(M*/Msun) ≤ 11.0 and are located outside cluster and rich group environments. Although there is local substructure present in the form of small, luminous satellite galaxies, the central galaxies are the dominant objects within the virial radii (r200), and each central galaxy is at least 5 times brighter than any other luminous galaxy within the friends-of-friends halo. We compute the weak lensing signal within projected radii 0.05 < rp/r200 < 1.5 and investigate the degree to which the weak lensing signal is anisotropic. Since CDM halos are non-spherical, the weak lensing signal is expected to be anisotropic; however, the degree of anisotropy that is observed depends upon the symmetry axes that are used to define the geometry.
The anisotropy is expected to be maximized when the major axis of the projected dark matter mass distribution is used to define the geometry. In practice in the observed universe, one must necessarily use the projected distribution of the luminous mass to define the geometry. If mass and light are not well-aligned, this results in a suppression of the weak lensing anisotropy. Our initial analysis shows that the ellipticity of the projected dark matter halo is uncorrelated with the ellipticity of the projected stellar mass. That is, ɛhalo ≠ f × ɛlight, where f is a constant multiplicative factor. In addition, in projection on the sky, the major axis of the dark matter mass is offset from that of the stellar mass by ~40° on average. On scales rp ≤ 0.15 r200, the weak lensing anisotropy obtained when using the stellar mass to define the geometry is of order 7% and agrees well with the anisotropy obtained when using the dark matter mass to define the geometry. On scales rp ~ r200, the anisotropy obtained when using the stellar mass to 9. Strength Training for Young Athletes. ERIC Educational Resources Information Center Kraemer, William J.; Fleck, Steven J. This guide is designed to serve as a resource for developing strength training programs for children. Chapter 1 uses research findings to explain why strength training is appropriate for children. Chapter 2 explains some of the important physiological concepts involved in children's growth and development as they apply to developing strength… 10. Hip abductor weakness is not the cause for iliotibial band syndrome. PubMed Grau, S; Krauss, I; Maiwald, C; Best, R; Horstmann, T 2008-07-01 Muscular deficits in the hip abductors are presumed to be a major factor in the development of Iliotibial Band Syndrome in runners. No definite relationship between muscular weakness of the hip abductors and the development of Iliotibial Band Syndrome or different ratios between hip adduction to abduction have been reported so far.
Isokinetic measurements were taken from 10 healthy runners and 10 runners with Iliotibial Band Syndrome. Primary outcome variables were concentric, eccentric, and isometric peak torque of the hip abductors and adductors at 30 degrees/s, and a concentric endurance quotient at the same angle velocity. Differences in muscle strength of the hip abductors between healthy (CO) and injured runners (ITBS) were not statistically significant in any of the muscle functions tested. Both groups showed the same strength differences between hip adduction and abduction, and increased strength in hip adduction. Weakness of hip abductors does not seem to play a role in the etiology of Iliotibial Band Syndrome in runners, since dynamic and static strength measurements did not differ between groups, and differences between hip abduction and adduction were the same. Strengthening of hip abductors seems to have little effect on the prevention of Iliotibial Band Syndrome in runners. 11. Strength properties of fly ash based controlled low strength materials. PubMed Türkel, S 2007-08-25 Controlled low strength material (CLSM) is a flowable mixture that can be used as a backfill material in place of compacted soils. Flowable fill requires no tamping or compaction to achieve its strength and typically has a load carrying capacity much higher than compacted soils, but it can still be excavated easily. The selection of CLSM type should be based on technical and economical considerations for specific applications. In this study, a mixture of high volume fly ash (FA), crushed limestone powder (filler) and a low percentage of pozzolana cement have been tried in different compositions. The amount of pozzolana cement was kept constant for all mixes as, 5% of fly ash weight. The amount of mixing water was chosen in order to provide optimum pumpability by determining the spreading ratio of CLSM mixtures using flow table method. 
The shear strength of the material is a measure of the material's ability to support imposed stresses. The shear strength properties of CLSM mixtures have been investigated by a series of laboratory tests. The direct shear test procedure was applied for determining the strength parameters Phi (angle of shearing resistance) and C(h) (cohesion intercept) of the material. The test results indicated that CLSM mixtures have superior shear strength properties compared to compacted soils. Shear strength, cohesion intercept and angle of shearing resistance values of CLSM mixtures exceeded conventional soil materials' similar properties at 7 days. These parameters proved that CLSM mixtures are suitable materials for backfill applications. 12. Weakly nonlinear hydrodynamic instabilities in inertial fusion SciTech Connect Haan, S.W. 1991-08-01 For many cases of interest to inertial fusion, growth of Rayleigh-Taylor and other hydrodynamic instabilities is such that the perturbations remain linear or weakly nonlinear. The transition to nonlinearity is studied via a second-order solution for multimode classical Rayleigh-Taylor growth. The second-order solution shows how classical Rayleigh-Taylor systems forget initial amplitude information in the weakly nonlinear phase. Stabilized growth relevant to inertial fusion is qualitatively different, and initial amplitudes are not dominated by nonlinear effects. In all systems with a full spectrum of modes, nonlinear effects begin when mode amplitudes reach about 1/(Lk²), for modes of wave number k and system size L. 13. Weak Energy Condition Violation and Superluminal Travel NASA Astrophysics Data System (ADS) Lobo, Francisco; Crawford, Paulo Recent solutions to the Einstein Field Equations involving negative energy densities, i.e., matter violating the weak-energy-condition, have been obtained, namely traversable wormholes, the Alcubierre warp drive and the Krasnikov tube.
These solutions are related to superluminal travel, although locally the speed of light is not surpassed. It is difficult to define faster-than-light travel in generic space-times, and one can construct metrics which apparently allow superluminal travel, but are in fact flat Minkowski space-times. Therefore, to avoid these difficulties it is important to provide an appropriate definition of superluminal travel. We investigate these problems and the relationship between weak-energy-condition violation and superluminal travel. 14. Crystallographic Phasing from Weak Anomalous Signals PubMed Central Liu, Qun; Hendrickson, Wayne A. 2015-01-01 The exploitation of anomalous signals for biological structural solution is maturing. Single-wavelength anomalous diffraction (SAD) is dominant in de novo structure analysis. Nevertheless, for challenging structures where the resolution is low (dmin ≥ 3.5 Å) or where only lighter atoms (Z ≤ 20) are present, as for native macromolecules, solved SAD structures are still scarce. With the recent rapid development in crystal handling, beamline instrumentation, optimization of data collection strategies, use of multiple crystals and structure determination technologies, the weak anomalous diffraction signals are now robustly measured and should be used for routine SAD structure determination. The review covers these recent advances in weak anomalous signal measurement, analysis and utilization. PMID:26432413 15. Amplification effects in optomechanics via weak measurements NASA Astrophysics Data System (ADS) Li, Gang; Wang, Tao; Song, He-Shan 2014-07-01 We revisit the scheme of single-photon weak-coupling optomechanics using postselection, proposed by Pepper, Ghobadi, Jeffrey, Simon, and Bouwmeester [Phys. Rev. Lett. 109, 023601 (2012), 10.1103/PhysRevLett.109.023601], by analyzing the exact solution of the dynamical evolution.
Positive and negative amplification effects of the displacement of the mirror's position can be generated when the Kerr phase is considered. This effect occurs when the postselected state of the photon is orthogonal to the initial state, which cannot be explained by the usual weak measurement results. The amplification effect can be further modulated by a phase shifter, and the maximal displacement state can appear within a short evolution time. 16. Weak cosmic censorship: as strong as ever. PubMed Hod, Shahar 2008-03-28 Spacetime singularities that arise in gravitational collapse are always hidden inside of black holes. This is the essence of the weak cosmic censorship conjecture. The hypothesis, put forward by Penrose 40 years ago, is still one of the most important open questions in general relativity. In this Letter, we reanalyze extreme situations which have been considered as counterexamples to the weak cosmic censorship conjecture. In particular, we consider the absorption of scalar particles with large angular momentum by a black hole. Ignoring back reaction effects may lead one to conclude that the incident wave may overspin the black hole, thereby exposing its inner singularity to distant observers. However, we show that when back reaction effects are properly taken into account, the stability of the black-hole event horizon is irrefutable. We therefore conclude that cosmic censorship is actually respected in this type of gedanken experiments. 17. Shock Wave Dynamics in Weakly Ionized Plasmas NASA Technical Reports Server (NTRS) Johnson, Joseph A., III 1999-01-01 An investigation of the dynamics of shock waves in weakly ionized argon plasmas has been performed using a pressure ruptured shock tube. The velocity of the shock is observed to increase when the shock traverses the plasma. The observed increases cannot be accounted for by thermal effects alone. 
Possible mechanisms that could explain the anomalous behavior include a vibrational/translational relaxation in the nonequilibrium plasma, electron diffusion across the shock front resulting from high electron mobility, and the propagation of ion-acoustic waves generated at the shock front. Using a turbulence model based on reduced kinetic theory, analysis of the observed results suggest a role for turbulence in anomalous shock dynamics in weakly ionized media and plasma-induced hypersonic drag reduction. 19. Security Weaknesses in Arbitrated Quantum Signature Protocols NASA Astrophysics Data System (ADS) Liu, Feng; Zhang, Kejia; Cao, Tianqing 2014-01-01 Arbitrated quantum signature (AQS) is a cryptographic scenario in which the sender (signer), Alice, generates the signature of a message and then a receiver (verifier), Bob, can verify the signature with the help of a trusted arbitrator, Trent.
In this paper, we point out there exist some security weaknesses in two AQS protocols. Our analysis shows Alice can successfully disavow any of her signatures by a simple attack in the first protocol. Furthermore, we study the security weaknesses of the second protocol from the aspects of forgery and disavowal. Some potential improvements of this kind of protocols are given. We also design a new method to authenticate a signature or a message, which makes AQS protocols immune to Alice's disavowal attack and Bob's forgery attack effectively. 20. An algorithm for multivariate weak stochastic dominance SciTech Connect Mosler, K. 1994-12-31 The talk addresses the computational problem of comparing two given probability distributions in n-space with respect to several stochastic orderings. The orderings investigated are weak first degree stochastic dominance, weak second degree stochastic dominance, and their dual ordering relations. For each of the four dominance relations we present conditions which are necessary and sufficient for dominance of F over G when F and G have finite support in n-space. An algorithm is proposed which operates efficiently on the join-semilattice generated by their joint support. If F and G are empirical distribution functions, and {anti F} and {anti G}denote the underlying probability laws, significance tests can be performed on {anti F} = {anti G} against the alternative that {anti F} {ne} {anti G} and {anti F} dominates {anti G} in one of the four orderings. Other applications are found in decision theory, applied probability, operations research, and economics. 1. Weak Cosmic Censorship: As Strong as Ever NASA Astrophysics Data System (ADS) Hod, Shahar 2008-03-01 Spacetime singularities that arise in gravitational collapse are always hidden inside of black holes. This is the essence of the weak cosmic censorship conjecture. 
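The conditions named in the stochastic-dominance abstract above (entry 20) are necessary and sufficient for dominance between distributions with finite support in n-space, operating on the join-semilattice of the joint support. As a much simpler illustration (a one-dimensional reduction, not the paper's multivariate algorithm), weak first-degree dominance of F over G reduces to checking F(x) ≤ G(x) at every point of the joint support:

```python
# Illustrative one-dimensional sketch only: the abstract's algorithm handles
# the multivariate case over a join-semilattice. In one dimension, F weakly
# first-degree dominates G iff F(x) <= G(x) everywhere on the joint support.
def cdf(support, probs, x):
    """CDF of a finite-support distribution evaluated at x."""
    return sum(p for s, p in zip(support, probs) if s <= x)

def fsd_dominates(sup_f, p_f, sup_g, p_g, tol=1e-12):
    """True if F weakly first-degree stochastically dominates G."""
    joint_support = sorted(set(sup_f) | set(sup_g))
    return all(cdf(sup_f, p_f, x) <= cdf(sup_g, p_g, x) + tol
               for x in joint_support)
```

For example, a distribution whose mass sits on uniformly larger values dominates: shifting every support point of G up by one yields an F with F(x) ≤ G(x) everywhere.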
The hypothesis, put forward by Penrose 40 years ago, is still one of the most important open questions in general relativity. In this Letter, we reanalyze extreme situations which have been considered as counterexamples to the weak cosmic censorship conjecture. In particular, we consider the absorption of scalar particles with large angular momentum by a black hole. Ignoring back reaction effects may lead one to conclude that the incident wave may overspin the black hole, thereby exposing its inner singularity to distant observers. However, we show that when back reaction effects are properly taken into account, the stability of the black-hole event horizon is irrefutable. We therefore conclude that cosmic censorship is actually respected in this type of gedanken experiments. 2. Towards weakly constrained double field theory NASA Astrophysics Data System (ADS) Lee, Kanghoon 2016-08-01 We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using strong constraint in double field theory. We show that X-ray (Radon) transform on a torus is well-suited for describing weakly constrained double fields, and any weakly constrained fields are represented as a sum of strongly constrained fields. Using inverse X-ray transform we define a novel binary operation which is compatible with the level matching constraint. Based on this formalism, we construct a consistent gauge transform and gauge invariant action without using strong constraint. We then discuss the relation of our result to the closed string field theory. Our construction suggests that there exists an effective field theory description for massless sector of closed string field theory on a torus in an associative truncation. 3. Disordered weak and strong topological insulators. PubMed Kobayashi, Koji; Ohtsuki, Tomi; Imura, Ken-Ichiro 2013-06-01 A global phase diagram of disordered weak and strong topological insulators is established numerically. 
As expected, the location of the phase boundaries is renormalized by disorder, a feature recognized in the study of the so-called topological Anderson insulator. Here, we report unexpected quantization, i.e., robustness against disorder of the conductance peaks on these phase boundaries. Another highlight of the work is on the emergence of two subregions in the weak topological insulator phase under disorder. According to the size dependence of the conductance, the surface states are either robust or "defeated" in the two subregions. The nature of the two distinct types of behavior is further revealed by studying the Lyapunov exponents. 4. From weak discontinuities to nondissipative shock waves SciTech Connect Garifullin, R. N.; Suleimanov, B. I. 2010-01-15 An analysis is presented of the effect of weak dispersion on transitions from weak to strong discontinuities in inviscid fluid dynamics. In the neighborhoods of transition points, this effect is described by simultaneous solutions to the Korteweg-de Vries equation u_t + uu_x + u_xxx = 0 and fifth-order nonautonomous ordinary differential equations. As x² + t² → ∞, the asymptotic behavior of these simultaneous solutions in the zone of undamped oscillations is given by quasi-simple wave solutions to Whitham equations of the form r_i(t, x) = t l_i(x/t²). 5. Mutual synchronization of weakly coupled gyrotrons SciTech Connect Rozental, R. M.; Glyavin, M. Yu.; Sergeev, A. S.; Zotova, I. V.; Ginzburg, N. S. 2015-09-15 The processes of synchronization of two weakly coupled gyrotrons are studied within the framework of non-stationary equations with non-fixed longitudinal field structure. With the allowance for a small difference of the free oscillation frequencies of the gyrotrons, we found a certain range of parameters where mutual synchronization is possible while high electronic efficiency is retained.
It is also shown that synchronization regimes can be realized even under random fluctuations of the parameters of the electron beams. 6. Acute neuromuscular weakness associated with dengue infection PubMed Central Hira, Harmanjit Singh; Kaur, Amandeep; Shukla, Anuj 2012-01-01 Background: Dengue infections may present with neurological complications. Whether these are due to neuromuscular disease or electrolyte imbalance is unclear. Materials and Methods: Eighty-eight patients of dengue fever required hospitalization during epidemic in year 2010. Twelve of them presented with acute neuromuscular weakness. We enrolled them for study. Diagnosis of dengue infection based on clinical profile of patients, positive serum IgM ELISA, NS1 antigen, and sero-typing. Complete hemogram, kidney and liver functions, serum electrolytes, and creatine phosphokinase (CPK) were tested. In addition, two patients underwent nerve conduction velocity (NCV) test and electromyography. Results: Twelve patients were included in the present study. Their age was between 18 and 34 years. Fever, myalgia, and motor weakness of limbs were most common presenting symptoms. Motor weakness developed on 2nd to 4th day of illness in 11 of 12 patients. In one patient, it developed on 10th day of illness. Ten of 12 showed hypokalemia. One was of Guillain-Barré syndrome and other suffered from myositis; they underwent NCV and electromyography. Serum CPK and SGOT raised in 8 out of 12 patients. CPK of patient of myositis was 5098 IU. All of 12 patients had thrombocytopenia. WBC was in normal range. Dengue virus was isolated in three patients, and it was of serotype 1. CSF was normal in all. Within 24 hours, those with hypokalemia recovered by potassium correction. Conclusions: It was concluded that the dengue virus infection led to acute neuromuscular weakness because of hypokalemia, myositis, and Guillain-Barré syndrome. It was suggested to look for presence of hypokalemia in such patients. PMID:22346188 7. 
Heat capacity in weakly correlated liquids SciTech Connect Khrustalyov, Yu. V.; Vaulina, O. S.; Koss, X. G. 2012-12-15 Previously unavailable numerical data related to the heat capacity in two- and three-dimensional liquid Yukawa systems are obtained by means of fluctuation theory. The relations between thermal conductivity and diffusion constants are numerically studied and discussed. A new approximation for the dependence of heat capacity on the non-ideality parameter in weakly correlated particle systems is proposed. Comparison of the obtained results to the existing theoretical and numerical data is discussed. 8. Exclusive processes in strong and weak interactions SciTech Connect Tsokos, K. 1983-01-01 Evolution equations for flavor singlet mesons are derived; these are solved in terms of Gegenbauer polynomials. These results have been applied to exclusive processes at large momentum transfer which involve flavor singlet mesons and glueballs - two glueball decays of the Upsilon, radiative psi decays and two-photon processes involving the eta. Exclusive, weak decays of heavy mesons are examined and the heavy mass scaling behavior of decay rates is obtained. 9. Weakly bound states in heterogeneous waveguides NASA Astrophysics Data System (ADS) Amore, Paolo; Fernández, Francisco M.; Hofmann, Christoph P. 2016-07-01 We study the spectrum of the Helmholtz equation in a two-dimensional infinite waveguide, containing a weak heterogeneity localized at an internal point, and obeying Dirichlet boundary conditions at its border. We use the variational theorem to derive the condition for which the lowest eigenvalue of the spectrum falls below the continuum threshold and a bound state appears, localized at the heterogeneity. We devise a rigorous perturbation scheme and derive the exact expression for the energy to third order in the heterogeneity. 10. Measuring neutrino masses with weak lensing SciTech Connect Wong, Yvonne Y. Y.
2006-11-17 Weak gravitational lensing of distant galaxies by large scale structure (LSS) provides an unbiased way to map the matter distribution in the low redshift universe. This technique, based on the measurement of small distortions in the images of the source galaxies induced by the intervening LSS, is expected to become a key cosmological probe in the future. We discuss how future lensing surveys can probe the sum of the neutrino masses at the 0.05 eV level. 11. [Weak signal detection in every heart cycle]. PubMed Yuan, J; Xu, X; Gao, D; Shi, G 2001-12-01 In this article, a new approach is introduced to lowering the myoelectric noise in weak ECG signals. We use an artificial neural network to whiten the noise, and then we adopt an adaptive filter whose reference signal is extracted from another ECG cycle. The outcome is the reduction of both white and non-white noise in the ECG signal. Satisfactory results have been achieved using this method in experiments on late potential detection. 12. Longitudinal assessment of grip strength using bulb dynamometer in Duchenne Muscular Dystrophy PubMed Central Pizzato, Tatiana M.; Baptista, Cyntia R. J. A.; Souza, Mariana A.; Benedicto, Michelle M. B.; Martinez, Edson Z.; Mattiello-Sverzut, Ana C. 2014-01-01 BACKGROUND: Grip strength is used to infer functional status in several pathological conditions, and the hand dynamometer has been used to estimate performance in other areas. However, this relationship is controversial in neuromuscular diseases, and studies with the bulb dynamometer comparing healthy children and children with Duchenne Muscular Dystrophy (DMD) are limited. OBJECTIVE: The evolution of grip strength and the magnitude of weakness were examined in boys with DMD compared to healthy boys. The functional data of the DMD boys were correlated with grip strength.
METHOD: Grip strength was recorded in 18 ambulant boys with DMD (Duchenne Group, DG) aged 4 to 13 years (mean 7.4±2.1) and 150 age-matched healthy volunteers (Control Group, CG), using a bulb dynamometer (North Coast NC70154). The follow-up of the DG was 6 to 33 months (3-12 sessions), and functional performance was verified using the Vignos scale. RESULTS: There was no difference between grip strength obtained by the dominant and non-dominant side for either group. Grip strength increased in the CG with chronological age while the DG remained stable or decreased. The comparison between groups showed a significant difference in grip strength, with CG values higher than DG values (95% confidence interval). In summary, the differences between the groups increased with age. Participants with 24 months or more of follow-up showed a progression of weakness as well as maintained Vignos scores. CONCLUSIONS: The amplitude of weakness increased with age in the DG. The bulb dynamometer detected the progression of muscular weakness. Functional performance remained virtually unchanged in spite of the increase in weakness. PMID:25003277 13. Ultra-weak photon emission of hands in aging prediction. PubMed Zhao, Xin; van Wijk, Eduard; Yan, Yu; van Wijk, Roeland; Yang, Huanming; Zhang, Yan; Wang, Jian 2016-09-01 Aging has been one of the topics most intensely investigated during recent decades. More scientists have been scrutinizing mechanisms behind the human aging process. Ultra-weak photon emission is known as one type of spontaneous photon emission that can be detected with a highly sensitive single photon counting photomultiplier tube (PMT) from the surface of human bodies. It may reflect the body's oxidative damage. Our aim was to examine whether ultra-weak photon emission from a human hand is able to predict one's chronological age. Sixty subjects were recruited and grouped by age.
We examined four areas of each hand: palm side of fingers, palm side of hand, dorsum side of fingers, and dorsum side of hand. Left and right hands were measured synchronously with two independent PMTs. Mean strength and Fano factor values of photon counts were utilized to compare the UPE patterns of males and females of different age groups. Subsequently, we utilized UPE data from the most sensitive PMT to develop an age prediction model. We randomly picked 49 subjects to construct the model, whereas the remaining 11 subjects were utilized for validation. The results demonstrated that the model fitted the observed values well (Pearson's r=0.6, adjusted R square=0.4, p=9.4E-7, accuracy=49/60). Further analysis revealed that the average difference between the chronological age and predicted age was only 7.6±0.8 years. It was concluded that this fast and non-invasive photon technology is sufficiently promising to be developed for the estimation of biological aging. PMID:27472904 14. How we built a strong company in a weak industry. PubMed Brown, R 2001-02-01 When Roger Brown and Linda Mason decided to start a child care and early-education company 15 years ago, they knew about the challenges inherent in the industry: no barriers to entry, low margins, few economies of scale, heavy regulatory oversight--to name just a few. But that didn't stop them. They eventually built Bright Horizons Family Solutions, a company that now has more than 340 high-quality child care centers, serving 40,000 children and employing 12,000 people. How did they do it? Sheer determination helped. But even more important, they developed a business model that took advantage of industry weaknesses. When the couple sat down to hash out a plan for the company, they realized that the key to achieving profitability and creating barriers to entry was to partner with companies.
They could achieve higher returns by having those companies build and outfit the centers and, at the same time, boost customer loyalty. Indeed, Bright Horizons' corporate clients came to see the state-of-the-art centers as a way to distinguish themselves in the eyes of current and prospective employees. The high-quality child care attracted the best employees and raised retention rates. Brown's first-person account describes the difficulties the couple and their company faced along the way, including the struggle for funding and a board that questioned Bright Horizons' business model and basic philosophy of good child care. But, Brown says, the commitment to a singular business model and the determination to make strengths out of weaknesses made the impossible possible. 16. Weak measurement and Bohmian conditional wave functions SciTech Connect Norsen, Travis; Struyve, Ward 2014-11-15 It was recently pointed out and demonstrated experimentally by Lundeen et al. that the wave function of a particle (more precisely, the wave function possessed by each member of an ensemble of identically-prepared particles) can be “directly measured” using weak measurement. Here it is shown that if this same technique is applied, with appropriate post-selection, to one particle from a perhaps entangled multi-particle system, the result is precisely the so-called “conditional wave function” of Bohmian mechanics. Thus, a plausibly operationalist method for defining the wave function of a quantum mechanical sub-system corresponds to the natural definition of a sub-system wave function which Bohmian mechanics uniquely makes possible. Similarly, a weak-measurement-based procedure for directly measuring a sub-system’s density matrix should yield, under appropriate circumstances, the Bohmian “conditional density matrix” as opposed to the standard reduced density matrix. Experimental arrangements to demonstrate this behavior–and also thereby reveal the non-local dependence of sub-system state functions on distant interventions–are suggested and discussed. - Highlights: • We study a “direct measurement” protocol for wave functions and density matrices.
• Weakly measured states of entangled particles correspond to Bohmian conditional states. • Novel method of observing quantum non-locality is proposed. 17. Skeletal muscle weakness in osteogenesis imperfecta mice PubMed Central Gentry, Bettina A; Ferreira, J. Andries; McCambridge, Amanda J.; Brown, Marybeth; Phillips, Charlotte L. 2010-01-01 Exercise intolerance, muscle fatigue and weakness are often-reported, little-investigated concerns of patients with osteogenesis imperfecta (OI). OI is a heritable connective tissue disorder hallmarked by bone fragility resulting primarily from dominant mutations in the proα1(I) or proα2(I) collagen genes and the recently discovered recessive mutations in post-translational modifying proteins of type I collagen. In this study we examined the soleus (S), plantaris (P), gastrocnemius (G), tibialis anterior (TA) and quadriceps (Q) muscles of mice expressing mild (+/oim) and moderately severe (oim/oim) OI for evidence of inherent muscle pathology. In particular, muscle weight, fiber cross-sectional area (CSA), fiber type, fiber histomorphology, fibrillar collagen content, and absolute, relative and specific peak tetanic force (Po, Po/mg and Po/CSA, respectively) of individual muscles were evaluated. Oim/oim mouse muscles were generally smaller, contained less fibrillar collagen, had decreased Po, and showed an inability to sustain Po for the 300 ms testing duration in specific muscles; +/oim mice had a similar but milder skeletal muscle phenotype. +/oim mice had mild weakness of specific muscles but were less affected than their oim/oim counterparts, which demonstrated readily apparent skeletal muscle pathology. Therefore muscle weakness in oim mice reflects inherent skeletal muscle pathology. PMID:20619344 18. Weak homological dimensions and biflat Koethe algebras SciTech Connect Pirkovskii, A Yu 2008-06-30 The homological properties of metrizable Koethe algebras λ(P) are studied.
A criterion for an algebra A=λ(P) to be biflat in terms of the Koethe set P is obtained, which implies, in particular, that for such algebras the properties of being biprojective, biflat, and flat on the left are equivalent to the surjectivity of the multiplication operator A ⊗̂ A → A. The weak homological dimensions (the weak global dimension w.dg and the weak bidimension w.db) of biflat Koethe algebras are calculated. Namely, it is shown that the conditions w.db λ(P) ≤ 1 and w.dg λ(P) ≤ 1 are equivalent to the nuclearity of λ(P); and if λ(P) is non-nuclear, then w.dg λ(P) = w.db λ(P) = 2. It is established that the nuclearity of a biflat Koethe algebra λ(P), under certain additional conditions on the Koethe set P, implies the stronger estimate db λ(P) ≤ 1, where db is the (projective) bidimension. On the other hand, an example is constructed of a nuclear biflat Koethe algebra λ(P) such that db λ(P) = 2 (while w.db λ(P) = 1). Finally, it is shown that many biflat Koethe algebras, while not being amenable, have trivial Hochschild homology groups in positive degrees (with arbitrary coefficients). Bibliography: 37 titles. 19. Francium Spectroscopy for Weak Interaction Studies NASA Astrophysics Data System (ADS) Orozco, Luis 2014-05-01 Francium, a radioactive element, is the heaviest alkali. Its atomic and nuclear structure makes it an ideal laboratory to study the weak interaction. Laser trapping and cooling in-line with the superconducting LINAC accelerator at Stony Brook opened the precision study of its atomic structure. I will present our proposal and progress towards weak interaction measurements at TRIUMF, the National Canadian Accelerator in Vancouver.
These include the commissioning run of the Francium Trapping Facility, hyperfine anomaly measurements on a chain of Fr isotopes, the nuclear anapole moment through parity non-conserving transitions in the ground state hyperfine manifold. These measurements should shed light on the nucleon-nucleon weak interaction. This work is done by the FrPNC collaboration: S. Aubin College of William and Mary, J. A. Behr TRIUMF, R. Collister U. Manitoba, E. Gomez UASLP, G. Gwinner U. Manitoba, M. R. Pearson TRIUMF, L. A. Orozco UMD, M. Tandecki TRIUMF, J. Zhang UMD Supported by NSF and DOE from the USA; TRIUMF, NRC and NSERC from Canada; and CONACYT from Mexico 20. Age-related weakness of proximal muscle studied with motor cortical mapping: a TMS study. PubMed Plow, Ela B; Varnerin, Nicole; Cunningham, David A; Janini, Daniel; Bonnett, Corin; Wyant, Alexandria; Hou, Juliet; Siemionow, Vlodek; Wang, Xiao-Feng; Machado, Andre G; Yue, Guang H 2014-01-01 Aging-related weakness is due in part to degeneration within the central nervous system. However, it is unknown how changes to the representation of corticospinal output in the primary motor cortex (M1) relate to such weakness. Transcranial magnetic stimulation (TMS) is a noninvasive method of cortical stimulation that can map representation of corticospinal output devoted to a muscle. Using TMS, we examined age-related alterations in maps devoted to biceps brachii muscle to determine whether they predicted its age-induced weakness. Forty-seven right-handed subjects participated: 20 young (22.6 ± 0.90 years) and 27 old (74.96 ± 1.35 years). We measured strength as force of elbow flexion and electromyographic activation of biceps brachii during maximum voluntary contraction. Mapping variables included: 1) center of gravity or weighted mean location of corticospinal output, 2) size of map, 3) volume or excitation of corticospinal output, and 4) response density or corticospinal excitation per unit area. 
Center of gravity was more anterior in old than in young (p<0.001), though there was no significant difference in strength between the age groups. Map size, volume, and response density showed no significant difference between groups. Regardless of age, center of gravity significantly predicted strength (β = -0.34, p = 0.005), while volume adjacent to the core of map predicted voluntary activation of biceps (β = 0.32, p = 0.008). Overall, the anterior shift of the map in older adults may reflect an adaptive change that allowed for the maintenance of strength. Laterally located center of gravity and higher excitation in the region adjacent to the core in weaker individuals could reflect compensatory recruitment of synergistic muscles. Thus, our study substantiates the role of M1 in adapting to aging-related weakness and subtending strength and muscle activation across age groups. Mapping from M1 may offer foundation for an examination of mechanisms that preserve 1. Oscillator strengths and collision strengths for S v NASA Technical Reports Server (NTRS) Van Wyngaarden, W. L.; Henry, R. J. W. 1981-01-01 Observations of the optical extreme-ultraviolet spectrum of the Jupiter planetary system during the Voyager space mission revealed bright emission lines of some sulfur ions. The spectra of the torus at the orbit of Io are likely to contain S V lines. The described investigation provides oscillator strengths and collision strengths for the first four UV lines. The collision strengths from the ground state to four other excited states are also obtained. Use is made of a two-state calculation which is checked for convergence for some transitions by employing a three-state or a four-state approximation. Target wave functions for S V are calculated so that the oscillator strengths calculated in dipole length and dipole velocity approximations agree within 5%. 2. The role of weak hydrogen bonds in chiral recognition. 
PubMed Scuderi, Debora; Le Barbu-Debus, Katia; Zehnacker, A 2011-10-28 Chiral recognition has been studied in neutral or ionic weakly bound complexes isolated in the gas phase by combining laser spectroscopy and quantum chemical calculations. Neutral complexes of the two enantiomers of lactic ester derivatives with chiral chromophores have been formed in a supersonic expansion. Their structure has been elucidated by means of IR-UV double resonance spectroscopy in the 3 μm region. In both systems described here, the main interaction ensuring the cohesion of the complex is a strong hydrogen bond between the chromophore and methyl-lactate. However, an additional hydrogen bond of much weaker strength plays a discriminative role between the two enantiomers. For example, the 1:1 heterochiral complex between R-(+)-2-naphthyl-ethanol and S-(+) methyl-lactate is observed, in contrast with the 1:1 homochiral complex, which lacks this additional hydrogen bond. On the other hand, the same kind of insertion structures are formed for the complex between S-(±)-cis-1-amino-indan-2-ol and the two enantiomers of methyl-lactate, but an additional addition complex is formed for R-methyl-lactate only. This selectivity rests on the formation of a weak CH-π interaction which is not possible for the other enantiomer. The protonated dimers of Cinchona alkaloids, namely quinine, quinidine, cinchonine and cinchonidine, have been isolated in an ion trap and studied by IRMPD spectroscopy in the region of the ν(OH) and ν(NH) stretch modes. The protonation site is located on the alkaloid nitrogen, which acts as a strong hydrogen bond donor in all the dimers studied. While the nature of the intermolecular hydrogen bond is similar in the homochiral and heterochiral complexes, the heterochiral complex displays an additional weak CH···O hydrogen bond located on its neutral part, which results in slightly different spectroscopic fingerprints in the ν(OH) stretch region.
This first spectroscopic evidence of chiral recognition in protonated dimers opens the way to the 3. Strength Measurements in Acute Hamstring Injuries: Intertester Reliability and Prognostic Value of Handheld Dynamometry. PubMed Reurink, Gustaaf; Goudswaard, Gert Jan; Moen, Maarten H; Tol, Johannes L; Verhaar, Jan A N; Weir, Adam 2016-08-01 Study Design Cohort study, repeated measures. Background Although hamstring strength measurements are used for assessing prognosis and monitoring recovery after hamstring injury, their actual clinical relevance has not been established. Handheld dynamometry (HHD) is a commonly used method of measuring muscle strength. The reliability of HHD has not been determined in athletes with acute hamstring injuries. Objectives To determine the intertester reliability and the prognostic value of hamstring HHD strength measurement in acute hamstring injuries. Methods We measured knee flexion strength with HHD in 75 athletes at 2 visits, at baseline (within 5 days of hamstring injury) and follow-up (5 to 7 days after the baseline measurement). We assessed isometric hamstring strength in 15° and 90° of knee flexion. Reliability analysis testing was performed by 2 testers independently at the follow-up visit. We recorded the time needed to return to play (RTP) up to 6 months following baseline. Results The intraclass correlation coefficients of the strength measurements in injured hamstrings were between 0.75 and 0.83. There was a statistically significant but weak correlation between the time to RTP and the strength deficit at 15° of knee flexion measured at baseline (Spearman r = 0.25, P = .045) and at the follow-up visit (Spearman r = 0.26, P = .034). Up to 7% of the variance in time to RTP is explained by this strength deficit. None of the other strength variables were significantly correlated with time to RTP. Conclusion Hamstring strength can be reliably measured with HHD in athletes with acute hamstring injuries. 
The prognostic value of strength measurements is limited, as there is only a weak association between the time to RTP and hamstring strength deficit after acute injury. Level of Evidence Prognosis, level 4. J Orthop Sports Phys Ther 2016;46(8):689-696. Epub 12 May 2016. doi:10.2519/jospt.2016.6363. 4. Standard systems for measurement of pKs and ionic mobilities. 1. Univalent weak acids. PubMed Slampová, Andrea; Krivánková, Ludmila; Gebauer, Petr; Bocek, Petr 2008-12-01 Determination of pK values of weak bases and acids by CZE has attracted considerable attention in current practice and has proved to offer the advantage of being applicable to mixtures of analytes. The method is based on the measurement of mobility curves plotting the effective mobility vs. the pH of the background electrolyte, followed by computer-assisted regression involving corrections for ionic strength and temperature. To cover the necessary range of pH for a given case, both buffering weak acids and bases are used in one set of measurements, which requires implementing computations of individual ionic strength corrections for each pH value. It is also well known that some components of frequently used background electrolytes may interact with the analytes measured, forming associates or complexes. This obviously deteriorates the reliability of the resulting data. This contribution brings a rational approach to this problem and establishes a standard system of anionic buffers for measurements of pKs and mobilities of weak acids, where the only counter cation present (besides H(+)) is Na(+). In this way, the risk of formation of complexes or associates of analytes with counter ions is strongly reduced. Moreover, the standard system of anionic buffers is selected in such a way that it provides, for an entire set of measurements, constant and accurately known ionic strength, and the operational conditions are selected so that they provide constant Joule heating.
Due to these precautions only one correction for ionic strength and temperature is needed for the obtained set of experimental data. This considerably facilitates their evaluation and regression analysis, as the corrections need not be implemented in the computation software. The reliability and the advantages of the proposed system are well documented by experiments, where the known problematic group of phenol derivatives was measured with high accuracy and without any sign of anomalous behaviour. PMID 5. Measurement of muscle strength in the intensive care unit. PubMed Bittner, Edward A; Martyn, Jeevendra A; George, Edward; Frontera, Walter R; Eikermann, Matthias 2009-10-01 Traditional (indirect) techniques, such as electromyography and nerve conduction velocity measurement, do not reliably predict intensive care unit-acquired muscle weakness and its clinical consequences. Therefore, quantitative assessment of skeletal muscle force is important for diagnosis of intensive care unit-acquired motor dysfunction. There are a number of ways of objectively assessing muscle strength, which can be categorized as techniques that quantify maximum voluntary contraction force and those that assess evoked (stimulated) muscle force. Important factors that limit the repetitive evaluation of maximum voluntary contraction force in intensive care unit patients are learning effects, pain during muscular contraction, and alteration of consciousness. The selection of the appropriate muscle is crucial for making adequate predictions of a patient's outcome. The upper airway dilators are much more susceptible to a decrease in muscle strength than the diaphragm, and impairment of upper airway patency is a key mechanism of extubation failure in intensive care unit patients.
Data suggest that the adductor pollicis muscle is an appropriate reference muscle to predict weakness of muscles that are typically affected by intensive care unit-acquired weakness, i.e., upper airway as well as extremity muscles. Stimulated (evoked) force of skeletal muscles, such as the adductor pollicis, can be assessed repetitively, independent of brain function, even in heavily sedated patients during high acuity of their disease. PMID:20046117 6. The Tensile Behavior of High-Strength Carbon Fibers. PubMed Langston, Tye 2016-08-01 Carbon fibers exhibit exceptional properties such as high stiffness and specific strength, making them excellent reinforcements for composite materials. However, it is difficult to directly measure their tensile properties and estimates are often obtained by tensioning fiber bundles or composites. While these macro scale tests are informative for composite design, their results differ from that of direct testing of individual fibers. Furthermore, carbon filament strength also depends on other variables, including the test length, actual fiber diameter, and material flaw distribution. Single fiber tensile testing was performed on high-strength carbon fibers to determine the load and strain at failure. Scanning electron microscopy was also conducted to evaluate the fiber surface morphology and precisely measure each fiber's diameter. Fiber strength was found to depend on the test gage length and in an effort to better understand the overall expected performance of these fibers at various lengths, statistical weak link scaling was performed. In addition, the true Young's modulus was also determined by taking the system compliance into account. It was found that all properties (tensile strength, strain to failure, and Young's modulus) matched very well with the manufacturers' reported values at 20 mm gage lengths, but deviated significantly at other lengths. PMID:27278219 8. Modeling variation in interaction strength between barnacles and fucoids. PubMed Kordas, Rebecca L; Dudgeon, Steve 2009-01-01 The strength by which species interact can vary throughout their ontogeny, as environments vary in space and time, and with the density of their populations. Characterizing strengths of interaction in situ for even a small number of species is logistically difficult and may apply only to those conditions under which the estimates were derived.
We sought to combine data from field experiments estimating interaction strength of life stages of the barnacle, Semibalanus balanoides, on germlings of Ascophyllum nodosum, with a model that explored the consequences of variability at per capita and per population levels to the abundance of year-old algal recruits. We further simulated how this interaction affected fucoid germling abundance as the timing of their respective settlements varied relative to one another, as occurs regionally across the Gulf of Maine, USA. Juvenile S. balanoides have a weak estimated per capita effect on germlings. Germling populations are sensitive to variation in per capita effects of juvenile barnacles because of the typically large population sizes of the latter. However, high mortality of juvenile barnacles weakens the population interaction strength over time. Adult barnacles probably weakly facilitate fucoid germlings, but greater survival of adults sustains the strength of that interaction at the population level. Germling abundance is positively associated with densities of adult barnacles and negatively associated with that of juvenile barnacles. Metamorphosing cyprid larvae have the strongest per capita effect on germling abundance, but the interaction between the two stages is so short-lived that germling abundance is altered little. Variation in the timing of barnacle and A. nodosum settlement relative to one another had very little influence on the abundance of yearling germlings. Interactions between barnacles and germlings may influence the demographic structure of A. nodosum populations and the persistence of fucoid 10. Compressive strength of carbon fibers SciTech Connect Prandy, J.M.; Hahn, H.T.
1991-01-01 Most composites are weaker in compression than in tension, which is due to the poor compressive strength of the load bearing fibers. The present paper discusses the compressive strengths and failure modes of 11 different carbon fibers: PAN-AS1, AS4, IM6, IM7, T700, T300, GY-30, pitch-75, ultra high modulus (UHM), high modulus (HM), and high strength (HS). The compressive strength was determined by embedding a fiber bundle in a transparent epoxy matrix and testing in compression. The resin allows for the containment and observation of failure during and after testing while also providing lateral support to the fibers. Scanning electron microscopy (SEM) was used to determine the global failure modes of the fibers. 11. Lossy compression of weak lensing data SciTech Connect Vanderveld, R. Ali; Bernstein, Gary M.; Stoughton, Chris; Rhodes, Jason; Massey, Richard; Dobke, Benjamin M. 2011-07-12 Future orbiting observatories will survey large areas of sky in order to constrain the physics of dark matter and dark energy using weak gravitational lensing and other methods. Lossy compression of the resultant data will improve the cost and feasibility of transmitting the images through the space communication network. We evaluate the consequences of the lossy compression algorithm of Bernstein et al. (2010) for the high-precision measurement of weak-lensing galaxy ellipticities. This square-root algorithm compresses each pixel independently, and the information discarded is by construction less than the Poisson error from photon shot noise. For simulated space-based images (without cosmic rays) digitized to the typical 16 bits per pixel, application of the lossy compression followed by image-wise lossless compression yields images with only 2.4 bits per pixel, a factor of 6.7 compression. We demonstrate that this compression introduces no bias in the sky background.
The compression introduces a small amount of additional digitization noise to the images, and we demonstrate a corresponding small increase in ellipticity measurement noise. The ellipticity measurement method is biased by the addition of noise, so the additional digitization noise is expected to induce a multiplicative bias on the galaxies' measured ellipticities. After correcting for this known noise-induced bias, we find a residual multiplicative ellipticity bias of m ≈ −4 × 10⁻⁴. This bias is small when compared to the many other issues that precision weak lensing surveys must confront, and furthermore we expect it to be reduced further with better calibration of ellipticity measurement methods. 12. Lossy compression of weak lensing data DOE PAGES Vanderveld, R. Ali; Bernstein, Gary M.; Stoughton, Chris; Rhodes, Jason; Massey, Richard; Dobke, Benjamin M. 2011-07-12 Future orbiting observatories will survey large areas of sky in order to constrain the physics of dark matter and dark energy using weak gravitational lensing and other methods. Lossy compression of the resultant data will improve the cost and feasibility of transmitting the images through the space communication network. We evaluate the consequences of the lossy compression algorithm of Bernstein et al. (2010) for the high-precision measurement of weak-lensing galaxy ellipticities. This square-root algorithm compresses each pixel independently, and the information discarded is by construction less than the Poisson error from photon shot noise. For simulated space-based images (without cosmic rays) digitized to the typical 16 bits per pixel, application of the lossy compression followed by image-wise lossless compression yields images with only 2.4 bits per pixel, a factor of 6.7 compression. We demonstrate that this compression introduces no bias in the sky background.
The compression introduces a small amount of additional digitization noise to the images, and we demonstrate a corresponding small increase in ellipticity measurement noise. The ellipticity measurement method is biased by the addition of noise, so the additional digitization noise is expected to induce a multiplicative bias on the galaxies' measured ellipticities. After correcting for this known noise-induced bias, we find a residual multiplicative ellipticity bias of m ≈ −4 × 10⁻⁴. This bias is small when compared to the many other issues that precision weak lensing surveys must confront, and furthermore we expect it to be reduced further with better calibration of ellipticity measurement methods. 13. Spurious Shear in Weak Lensing with LSST SciTech Connect Chang, C.; Kahn, S.M.; Jernigan, J.G.; Peterson, J.R.; AlSayyad, Y.; Ahmad, Z.; Bankert, J.; Bard, D.; Connolly, A.; Gibson, R.R.; Gilmore, K.; Grace, E.; Hannel, M.; Hodge, M.A.; Jee, M.J.; Jones, L.; Krughoff, S.; Lorenz, S.; Marshall, P.J.; Marshall, S.; Meert, A. 2012-09-19 The complete 10-year survey from the Large Synoptic Survey Telescope (LSST) will image ≈ 20,000 square degrees of sky in six filter bands every few nights, bringing the final survey depth to r ≈ 27.5, with over 4 billion well measured galaxies. To take full advantage of this unprecedented statistical power, the systematic errors associated with weak lensing measurements need to be controlled to a level similar to the statistical errors. This work is the first attempt to quantitatively estimate the absolute level and statistical properties of the systematic errors on weak lensing shear measurements due to the most important physical effects in the LSST system via high fidelity ray-tracing simulations.
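The square-root step of the Bernstein et al. (2010) scheme described in the lossy-compression entries above can be illustrated with a minimal sketch: quantize each pixel on a square-root scale so that the quantization error stays below the Poisson shot noise. This is an illustrative reconstruction, not the published implementation; the gain and step values are assumptions.

```python
import numpy as np

def sqrt_compress(pixels, gain=1.0, step=0.5):
    """Quantize pixel counts on a square-root scale.

    For Poisson counts N, sqrt(N) has nearly constant standard
    deviation (~0.5), so a fixed step in sqrt space discards less
    information than the photon shot noise at every signal level.
    """
    root = np.sqrt(np.maximum(pixels / gain, 0.0))
    return np.round(root / step).astype(np.int32)

def sqrt_decompress(codes, gain=1.0, step=0.5):
    # Invert the square-root mapping; the quantization loss itself
    # is irreversible by construction.
    root = codes * step
    return gain * root ** 2

rng = np.random.default_rng(0)
truth = rng.poisson(lam=1000.0, size=10_000).astype(float)
recon = sqrt_decompress(sqrt_compress(truth))

shot_noise = float(np.sqrt(1000.0))        # Poisson sigma at this level
quant_rms = float(np.std(recon - truth))   # added digitization noise
```

Because the quantized codes span only a few hundred levels for 16-bit data, a subsequent image-wise lossless pass can reach a few bits per pixel, which is qualitatively the compression factor the abstracts report.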
We identify and isolate the different sources of algorithm-independent, additive systematic errors on shear measurements for LSST and predict their impact on the final cosmic shear measurements using conventional weak lensing analysis techniques. We find that the main source of the errors comes from an inability to adequately characterise the atmospheric point spread function (PSF) due to its high frequency spatial variation on angular scales smaller than ≈ 10′ in the single short exposures, which propagates into a spurious shear correlation function at the 10⁻⁴–10⁻³ level on these scales. With the large multi-epoch dataset that will be acquired by LSST, the stochastic errors average out, bringing the final spurious shear correlation function to a level very close to the statistical errors. Our results imply that the cosmological constraints from LSST will not be severely limited by these algorithm-independent, additive systematic effects. 14. Nanoelectromechanics of superconducting weak links (Review Article) NASA Astrophysics Data System (ADS) Parafilo, A. V.; Krive, I. V.; Shekhter, R. I.; Jonson, M. 2012-04-01 Nanoelectromechanical effects in superconducting weak links are considered. Three different superconducting devices are studied: (i) a single-Cooper-pair transistor, (ii) a transparent SNS junction, and (iii) a single-level quantum dot coupled to superconducting electrodes. The electromechanical coupling is due to electrostatic or magnetomotive forces acting on a movable part of the device. It is demonstrated that depending on the frequency of mechanical vibrations the electromechanical coupling could either suppress or enhance the Josephson current. Nonequilibrium effects associated with cooling of the vibrational subsystem or pumping energy into it at low bias voltages are discussed. 15. Thermodynamics of Weakly Measured Quantum Systems.
PubMed Alonso, Jose Joaquin; Lutz, Eric; Romito, Alessandro 2016-02-26 We consider continuously monitored quantum systems and introduce definitions of work and heat along individual quantum trajectories that are valid for coherent superposition of energy eigenstates. We use these quantities to extend the first and second laws of stochastic thermodynamics to the quantum domain. We illustrate our results with the case of a weakly measured driven two-level system and show how to distinguish between quantum work and heat contributions. We finally employ quantum feedback control to suppress detector backaction and determine the work statistics. PMID:26967399 16. Thermodynamics of Weakly Measured Quantum Systems NASA Astrophysics Data System (ADS) Alonso, Jose Joaquin; Lutz, Eric; Romito, Alessandro 2016-02-01 We consider continuously monitored quantum systems and introduce definitions of work and heat along individual quantum trajectories that are valid for coherent superposition of energy eigenstates. We use these quantities to extend the first and second laws of stochastic thermodynamics to the quantum domain. We illustrate our results with the case of a weakly measured driven two-level system and show how to distinguish between quantum work and heat contributions. We finally employ quantum feedback control to suppress detector backaction and determine the work statistics. 17. Weak η production off the nucleon SciTech Connect Alam, M. Rafi; Athar, M. Sajjad; Alvarez-Ruso, L.; Vacas, M. J. Vicente 2015-05-15 The weak η-meson production off the nucleon induced by (anti)neutrinos is studied at low and intermediate energies, the range of interest for several ongoing and future neutrino experiments. We consider Born diagrams and the excitation of N*(1535)S₁₁ and N*(1650)S₁₁ resonances.
The vector part of the N–S₁₁ transition form factors has been obtained from the MAID helicity amplitudes while the poorly known axial part is constrained with the help of the partial conservation of the axial current (PCAC) and assuming the pion-pole dominance of the pseudoscalar form factor. 18. Electrostatic decay in a weakly magnetized plasma. PubMed Layden, A; Cairns, Iver H; Li, B; Robinson, P A 2013-05-01 The kinematics of the electrostatic (ES) decay of a Langmuir wave into a Langmuir wave and an ion sound wave are generalized to a weakly magnetized plasma. Unlike the unmagnetized case, ES decay in a magnetized plasma is always kinematically permitted and can produce daughter Langmuir waves with very small wave numbers, which we demonstrate by quasilinear simulations. The simulations further show that ES decay in magnetized plasmas is consistent with STEREO spacecraft observations of transversely polarized Langmuir waves in the solar wind. PMID:23683206 19. Supersymmetric Higgs Bosons in Weak Boson Fusion SciTech Connect Hollik, Wolfgang; Plehn, Tilman; Rauch, Michael; Rzehak, Heidi 2009-03-06 We compute the complete supersymmetric next-to-leading-order corrections to the production of a light Higgs boson in weak-boson fusion. The size of the electroweak corrections is of similar order as the next-to-leading-order corrections in the standard model. The supersymmetric QCD corrections turn out to be significantly smaller than expected and than their electroweak counterparts. These corrections are an important ingredient to a precision analysis of the (supersymmetric) Higgs sector at the LHC, either as a known correction factor or as a contribution to the theory error. 20. Thermodynamics of Weakly Measured Quantum Systems.
PubMed Alonso, Jose Joaquin; Lutz, Eric; Romito, Alessandro 2016-02-26 We consider continuously monitored quantum systems and introduce definitions of work and heat along individual quantum trajectories that are valid for coherent superposition of energy eigenstates. We use these quantities to extend the first and second laws of stochastic thermodynamics to the quantum domain. We illustrate our results with the case of a weakly measured driven two-level system and show how to distinguish between quantum work and heat contributions. We finally employ quantum feedback control to suppress detector backaction and determine the work statistics. 1. LensTools: Weak Lensing computing tools NASA Astrophysics Data System (ADS) Petri, A. 2016-02-01 LensTools implements a wide range of routines frequently used in Weak Gravitational Lensing, including tools for image analysis, statistical processing and numerical theory predictions. The package offers many useful features, including complete flexibility and easy customization of input/output formats; efficient measurements of power spectrum, PDF, Minkowski functionals and peak counts of convergence maps; survey masks; artificial noise generation engines; easy to compute parameter statistical inferences; ray tracing simulations; and many others. It requires standard numpy and scipy, and depending on tools used, may require Astropy (ascl:1304.002), emcee (ascl:1303.002), matplotlib, and mpi4py. 2. Plasma Emission by Weak Turbulence Processes NASA Astrophysics Data System (ADS) Ziebell, L. F.; Yoon, P. H.; Gaelzer, R.; Pavan, J. 2014-11-01 The plasma emission is the radiation mechanism responsible for solar type II and type III radio bursts. The first theory of plasma emission was put forth in the 1950s, but the rigorous demonstration of the process based upon first principles had been lacking. The present Letter reports the first complete numerical solution of electromagnetic weak turbulence equations. 
It is shown that the fundamental emission is dominant and unless the beam speed is substantially higher than the electron thermal speed, the harmonic emission is not likely to be generated. The present findings may be useful for validating reduced models and for interpreting particle-in-cell simulations. 3. Naturalness and the weak gravity conjecture. PubMed Cheung, Clifford; Remmen, Grant N 2014-08-01 The weak gravity conjecture (WGC) is an ultraviolet consistency condition asserting that an Abelian force requires a state of charge q and mass m with q > m/m_Pl. We generalize the WGC to product gauge groups and study its tension with the naturalness principle for a charged scalar coupled to gravity. Reconciling naturalness with the WGC either requires a Higgs phase or a low cutoff at Λ ∼ q m_Pl. If neither applies, one can construct simple models that forbid a natural electroweak scale and whose observation would rule out the naturalness principle. 4. Improved Quantum Signature Scheme with Weak Arbitrator NASA Astrophysics Data System (ADS) Su, Qi; Li, Wen-Min 2013-09-01 In this paper, we find a man-in-the-middle attack on the quantum signature scheme with a weak arbitrator (Luo et al., Int. J. Theor. Phys., 51:2135, 2012). In that scheme, the authors proposed a quantum signature based on quantum one way function which contains both verifying the signer phase and verifying the signed message phase. However, after our analysis we will show that Eve can adopt different strategies in respective phases to forge the signature without being detected. Then we present an improved scheme to increase the security. 5. Weak Localization in few layer Black Phosphorus NASA Astrophysics Data System (ADS) Gillgren, Nathaniel; Shi, Yanmeng; Espiritu, Timothy; Watanabe, Kenji; Taniguchi, Takahashi; Lau, Chun Ning (Jeanie) Few-layer black phosphorus has recently attracted interest from the scientific community due to its high mobility, tunable band gap, and large anisotropy.
Recent experiments have demonstrated that black phosphorus provides a promising candidate to explore the physics of 2D semiconductors. In this study we explore the magnetotransport of few-layer black phosphorus-boron nitride heterostructure devices at low magnetic fields. Weak localization is observed at low temperatures. We extract the dephasing length and measure its dependence on temperature, carrier density and electric field. 6. Hollow vortices in weakly compressible flows NASA Astrophysics Data System (ADS) Krishnamurthy, Vikas; Crowdy, Darren 2015-11-01 In a two-dimensional, inviscid and steady fluid flow, hollow vortices are bounded regions of constant pressure with non-zero circulation. It is known that for an infinite row of incompressible hollow vortices, analytical solutions for the flow field and the shape of the hollow vortex boundary can be obtained using conformal mapping methods. In this talk, we show how to derive analytical expressions for a weakly compressible hollow vortex row. This is done by introducing a new method based on the Imai-Lamla formula. We will also touch upon how to extend these results to a von Kármán street of hollow vortices. 7. Parametric Amplification For Detecting Weak Optical Signals NASA Technical Reports Server (NTRS) Hemmati, Hamid; Chen, Chien; Chakravarthi, Prakash 1996-01-01 Optical-communication receivers of proposed type implement high-sensitivity scheme of optical parametric amplification followed by direct detection for reception of extremely weak signals. Incorporates both optical parametric amplification and direct detection into optimized design enhancing effective signal-to-noise ratios during reception in photon-starved (photon-counting) regime. Eliminates need for complexity of heterodyne detection scheme and partly overcomes limitations imposed on older direct-detection schemes by noise generated in receivers and by limits on quantum efficiencies of photodetectors. 8.
Weak neutral currents and collapse initiated supernova SciTech Connect Wilson, J.R. 1993-03-19 Since 1974 the neutrino processes mediated by neutral currents have been a part of supernova (SN) modeling calculations. In this report only present day SN calculations will be discussed. First I will give a brief description of the SN computer model and an outline of the explosion process as depicted by that model. Then I will discuss the role weak neutral current (WNC) processes play in this explosion process. Finally, I will discuss inelastic scattering of tau neutrinos by heavy elements in WNC or Earth as a mechanism for measuring the mass of the tau neutrino. 9. [The weakness of individual psychologic dream theory]. PubMed Strunz, F 1988-05-13 This article undertakes a critical evaluation of Adlerian dream theory. The main weakness of the theory is found to be its lack of an inherent instance of truth that shows the dreamer the way to a better and more feasible life style. Contemporary Adlerians' treatment of the master's dream dogmas and their practical use in psychotherapy are described. There seems to be a convergence movement of today's practical application methods of the dream in all psychotherapeutic schools. Adlerian dream interpretation in the original sense intended by Adler is practised nowhere by psychotherapists today and seems largely antiquated. 10. Precision frequency measurements with interferometric weak values NASA Astrophysics Data System (ADS) Starling, David J.; Dixon, P. Ben; Jordan, Andrew N.; Howell, John C. 2010-12-01 We demonstrate an experiment which utilizes a Sagnac interferometer to measure a change in optical frequency of 129 ± 7 kHz/Hz with only 2 mW of continuous-wave, single-mode input power. We describe the measurement of a weak value and show how even higher-frequency sensitivities may be obtained over a bandwidth of several nanometers.
This technique has many possible applications, such as precision relative frequency measurements and laser locking without the use of atomic lines. 11. Failure behavior and constitutive model of weakly consolidated soft rock. PubMed Wang, Wei-ming; Zhao, Zeng-hui; Wang, Yong-ji; Gao, Xin 2013-01-01 Mining areas in western China are mainly located in soft rock strata with poor bearing capacity. In order to make the deformation failure mechanism and strength behavior of weakly consolidated soft mudstone and coal rock hosted in Ili No. 4 mine of Xinjiang area clear, some uniaxial and triaxial compression tests were carried out according to the samples of rocks gathered in the studied area, respectively. Meanwhile, a damage constitutive model which considered the initial damage was established by introducing a damage variable and a correction coefficient. A linearization process method was introduced according to the characteristics of the fitting curve and experimental data. The results showed that samples under different moisture contents and confining pressures presented completely different failure mechanism. The given model could accurately describe the elastic and plastic yield characteristics as well as the strain softening behavior of collected samples at postpeak stage. Moreover, the model could precisely reflect the relationship between the elastic modulus and confining pressure at prepeak stage. 12. Failure Behavior and Constitutive Model of Weakly Consolidated Soft Rock PubMed Central Wang, Wei-ming; Zhao, Zeng-hui; Wang, Yong-ji; Gao, Xin 2013-01-01 Mining areas in western China are mainly located in soft rock strata with poor bearing capacity. In order to make the deformation failure mechanism and strength behavior of weakly consolidated soft mudstone and coal rock hosted in Ili No. 4 mine of Xinjiang area clear, some uniaxial and triaxial compression tests were carried out according to the samples of rocks gathered in the studied area, respectively. 
Meanwhile, a damage constitutive model which considered the initial damage was established by introducing a damage variable and a correction coefficient. A linearization process method was introduced according to the characteristics of the fitting curve and experimental data. The results showed that samples under different moisture contents and confining pressures presented completely different failure mechanism. The given model could accurately describe the elastic and plastic yield characteristics as well as the strain softening behavior of collected samples at postpeak stage. Moreover, the model could precisely reflect the relationship between the elastic modulus and confining pressure at prepeak stage. PMID:24489511 13. Pollux: a stable weak dipolar magnetic field but no planet? NASA Astrophysics Data System (ADS) Aurière, Michel; Konstantinova-Antova, Renada; Espagnet, Olivier; Petit, Pascal; Roudier, Thierry; Charbonnel, Corinne; Donati, Jean-François; Wade, Gregg A. 2014-08-01 Pollux is considered as an archetype of a giant star hosting a planet: its radial velocity (RV) presents sinusoidal variations with a period of about 590 d, which have been stable for more than 25 years. Using ESPaDOnS and Narval we have detected a weak (sub-gauss) magnetic field at the surface of Pollux and followed up its variations with Narval during 4.25 years, i.e. more than for two periods of the RV variations. The longitudinal magnetic field is found to vary with a sinusoidal behaviour with a period close to that of the RV variations and with a small shift in phase. We then performed a Zeeman Doppler imaging (ZDI) investigation from the Stokes V and Stokes I least-squares deconvolution (LSD) profiles. A rotational period is determined, which is consistent with the period of variations of the RV. The magnetic topology is found to be mainly poloidal and this component almost purely dipolar. The mean strength of the surface magnetic field is about 0.7 G. 
As an alternative to the scenario in which Pollux hosts a close-in exoplanet, we suggest that the magnetic dipole of Pollux can be associated with two temperature and macroturbulent velocity spots which could be sufficient to produce the RV variations. We finally investigate the scenarios for the origin of the magnetic field which could explain the observed properties of Pollux. 14. Plasma waves downstream of weak collisionless shocks NASA Technical Reports Server (NTRS) Coroniti, F. V.; Greenstadt, E. W.; Moses, S. L.; Smith, E. J.; Tsurutani, B. T. 1993-01-01 In September 1983 the International Sun Earth Explorer 3 (ISEE 3) International Cometary Explorer (ICE) spacecraft made a long traversal of the distant dawnside flank region of the Earth's magnetosphere and had many encounters with the low Mach number bow shock. These weak shocks excite plasma wave electric field turbulence with amplitudes comparable to those detected in the much stronger bow shock near the nose region. Downstream of quasi-perpendicular (quasi-parallel) shocks, the E field spectra exhibit a strong peak (plateau) at midfrequencies (1 - 3 kHz); the plateau shape is produced by a low-frequency (100 - 300 Hz) emission which is more intense behind quasi-parallel shocks. Measurements downstream of two quasi-perpendicular shocks show that the low frequency signals are polarized parallel to the magnetic field, whereas the midfrequency emissions are unpolarized or only weakly polarized. A new high frequency (10 - 30 kHz) emission which is above the maximum Doppler shift exhibits a distinct peak at high frequencies; this peak is often blurred by the large amplitude fluctuations of the midfrequency waves. The high-frequency component is strongly polarized along the magnetic field and varies independently of the lower-frequency waves. 15.
Probing hysteretic elasticity in weakly nonlinear materials SciTech Connect Johnson, Paul A; Haupert, Sylvain; Renaud, Guillaume; Riviere, Jacques; Talmant, Maryline; Laugier, Pascal 2010-12-07 Our work is aimed at assessing the elastic and dissipative hysteretic nonlinear parameters' repeatability (precision) using several classes of materials with weak, intermediate and high nonlinear properties. In this contribution, we describe an optimized Nonlinear Resonant Ultrasound Spectroscopy (NRUS) measuring and data processing protocol applied to small samples. The protocol is used to eliminate the effects of environmental condition changes that take place during an experiment, and that may mask the intrinsic elastic nonlinearity. As an example, in our experiments, we identified external temperature fluctuation as a primary source of material resonance frequency and elastic modulus variation. A variation of 0.1 °C produced a frequency variation of 0.01 %, which is similar to the expected nonlinear frequency shift for weakly nonlinear materials. In order to eliminate environmental effects, the variation in f₀ (the elastically linear resonance frequency proportional to modulus) is fit with the appropriate function, and that function is used to correct the NRUS calculation of nonlinear parameters. With our correction procedure, we measured relative resonant frequency shifts of 10⁻⁵, which are below 10⁻⁴, often considered the limit to NRUS sensitivity under common experimental conditions. Our results show that the procedure is an alternative to the stringent control of temperature often applied. Applying the approach, we report nonlinear parameters for several materials, some with very small nonclassical nonlinearity. The approach has broad application to NRUS and other Nonlinear Elastic Wave Spectroscopy approaches. 16. Weak Decays of Excited B Mesons.
PubMed Grinstein, B; Martin Camalich, J 2016-04-01 We investigate the decays of the excited (bq̄) mesons as probes of the short-distance structure of the weak ΔB=1 transitions. These states are unstable under the electromagnetic or strong interactions, although their widths are typically suppressed by phase space. Compared to the pseudoscalar B meson, the purely leptonic decays of the vector B* are not chirally suppressed and are sensitive to different combinations of the underlying weak effective operators. An interesting example is B_s* → ℓ⁺ℓ⁻, which has a rate that can be accurately predicted in the standard model. The branching fraction is B ∼ 10⁻¹¹, irrespective of the lepton flavor and where the main uncertainty stems from the unmeasured and theoretically not well known B_s* width. We discuss the prospects for producing this decay mode at the LHC and explore the possibility of measuring the B_s* → ℓℓ amplitude, instead, through scattering experiments at the B_s* resonance peak. PMID:27104698 17. Auroral weak double layers: A critical assessment NASA Astrophysics Data System (ADS) Koskinen, Hannu E. J.; Mälkki, Anssi M. Weak double layers (WDLs) were first observed in the mid-altitude auroral magnetosphere in 1976 by the S3-3 satellite. The observations were confirmed by Viking in 1986, when more detailed information of these small-scale plasma structures became available. WDLs are upward moving rarefactive solitary structures with negative electric potential. The potential drop over a WDL is typically 0-1 V with electric field pointing predominantly upward. The structures are usually found in relatively weak (≤2 kV) auroral acceleration regions where the field-aligned current is upward, but sometimes very small. The observations suggest that WDLs exist in regions of cool electron and ion background. Most likely the potential structures are embedded in the background ion population that may drift slowly upward.
There have been several attempts for a plasma physical explanation of WDLs but so far the success has not been very good. Computer simulations have been able to produce similar structures, but usually for somewhat unrealistic plasma parameters. A satisfactory understanding of the phenomenon requires consideration of the role of WDLs in the magnetosphere-ionosphere (MI) coupling, including the large-scale electric fields, both parallel and perpendicular to the magnetic field, and the Alfvén waves mediating the coupling. In this report we give a critical review of our present understanding of WDLs. We try to find out what can be safely deduced from the observations, what are just educated guesses, and where we may go wrong. 18. Weak scale from the maximum entropy principle NASA Astrophysics Data System (ADS) Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu 2015-03-01 The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S³ universe at the final stage S_rad becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by v_h ∼ T_BBN² / (M_Pl y_e⁵), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_Pl is the Planck mass. 19. Weak Decays of Excited B Mesons.
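The scaling v_h ∼ T_BBN²/(M_Pl y_e⁵) quoted in the maximum-entropy-principle entry above can be sanity-checked numerically. The specific input values below (e.g. T_BBN ≈ 1 MeV) are my own assumptions for an order-of-magnitude check, not numbers taken from the paper:

```python
import math

# All energies in GeV.
m_e = 0.511e-3    # electron mass
v_obs = 246.0     # observed Higgs vacuum expectation value
T_bbn = 1e-3      # ~1 MeV: rough start of Big Bang nucleosynthesis
M_pl = 1.22e19    # Planck mass

# Electron Yukawa coupling, from m_e = y_e * v / sqrt(2).
y_e = math.sqrt(2) * m_e / v_obs

# Order-of-magnitude estimate from the abstract's scaling relation.
v_estimate = T_bbn ** 2 / (M_pl * y_e ** 5)
```

With these inputs the estimate comes out at a few hundred GeV, consistent with the O(300 GeV) figure the abstract quotes.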
PubMed Grinstein, B; Martin Camalich, J 2016-04-01 We investigate the decays of the excited (bq̄) mesons as probes of the short-distance structure of the weak ΔB=1 transitions. These states are unstable under the electromagnetic or strong interactions, although their widths are typically suppressed by phase space. Compared to the pseudoscalar B meson, the purely leptonic decays of the vector B* are not chirally suppressed and are sensitive to different combinations of the underlying weak effective operators. An interesting example is B_s* → ℓ⁺ℓ⁻, which has a rate that can be accurately predicted in the standard model. The branching fraction is B ∼ 10⁻¹¹, irrespective of the lepton flavor and where the main uncertainty stems from the unmeasured and theoretically not well known B_s* width. We discuss the prospects for producing this decay mode at the LHC and explore the possibility of measuring the B_s* → ℓℓ amplitude, instead, through scattering experiments at the B_s* resonance peak. 20. Oxygen consumption in weakly electric Neotropical fishes. PubMed Julian, David; Crampton, William G R; Wohlgemuth, Stephanie E; Albert, James S 2003-12-01 Weakly electric gymnotiform fishes with wave-type electric organ discharge (EOD) are less hypoxia-tolerant and are less likely to be found in hypoxic habitats than weakly electric gymnotiforms with pulse-type EOD, suggesting that differences in metabolism resulting from EOD type affect habitat choice. Although gymnotiform fishes are common in most Neotropical freshwaters and represent the dominant vertebrates in some habitats, the metabolic rates of these unique fishes have never been determined. In this study, O₂ consumption rates during EOD generation are reported for 34 gymnotiforms representing 23 species, all five families and 17 (59%) of the 28 genera.
Over the size range sampled (0.4 g to 125 g), O2 consumption of gymnotiform fishes was dependent on body mass, as expected, fitting a power function with a scaling exponent of 0.74, but the O2 consumption rate was generally about 50% of that expected by extrapolation of temperate teleost metabolic rates to a similar ambient temperature (26 °C). O2 consumption rate was not dependent on EOD type, but maintenance of "scan swimming" (continuous forwards and backwards swimming), which is characteristic only of gymnotiforms with wave-type EODs, increased O2 consumption 2.83 ± 0.49-fold (mean ± SD). This suggests that the increased metabolic cost of scan swimming could restrict gymnotiforms with wave-type EODs from hypoxic habitats.
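The fitted scaling law lends itself to a quick back-of-the-envelope calculation. The sketch below uses only numbers stated in the abstract (exponent 0.74, mass range 0.4–125 g):

```python
# Relative whole-animal O2 consumption implied by the fitted power law
# rate ∝ M^0.74, evaluated across the sampled body-mass range.
mass_ratio = 125 / 0.4            # heaviest / lightest fish, in grams
rate_ratio = mass_ratio ** 0.74   # predicted ratio of O2 consumption rates
print(f"~{rate_ratio:.0f}x")
```

So a roughly 300-fold spread in body mass corresponds to only about a 70-fold spread in predicted whole-animal metabolic rate, which is the usual consequence of a sublinear scaling exponent.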
https://zbmath.org/?q=an%3A1108.34033
## Forced singular oscillators and the method of lower and upper solutions. (English) Zbl 1108.34033

Summary: We study the existence of positive periodic solutions of the second-order differential equation $u''+g(u)u'+f(t,u)=h(t)$, where $f(t,\cdot)$ has a singularity of repulsive type at the origin. We use the method of lower and upper solutions.

### MSC: 34C25 Periodic solutions to ordinary differential equations; 34B15 Nonlinear boundary value problems for ordinary differential equations
https://www.albert.io/learn/ap-physics-1-and-2/question/four-particle-entropy
There are four particles of a substance distributed between the two chambers of a container that makes up an isolated system. The particles are in constant motion. Which of the following is the probability of having all the particles on the left side of the container, and how does this relate to entropy?

A There is a 33% probability of finding all the particles on the left because they can all be on the left, all on the right, or split. This is entropy because entropy is the probability of a certain state of a substance.

B There is a 6.25% probability of finding all the particles on the left because this is one of 16 possible microstates. This shows a low entropy because low probability is a highly ordered system.

C There is a 50% probability of finding all the particles on the left because each particle is either on the right or on the left. This middle value for probability means that the entropy is the same as the probability that just one particle could be either right or left.

D The probability of finding all the particles on the left is not related to entropy because change in entropy is the heat energy transferred divided by average temperature.
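The counting behind option B can be checked by brute force. This short Python sketch enumerates the 2⁴ = 16 equally likely microstates of four distinguishable particles:

```python
from itertools import product

# Enumerate the microstates of 4 distinguishable particles, each of
# which can sit in the Left or Right chamber.
states = list(product("LR", repeat=4))
all_left = [s for s in states if set(s) == {"L"}]

p = len(all_left) / len(states)
print(len(states), p)  # 16 microstates, probability 0.0625 = 6.25%
```

Only one of the sixteen microstates has every particle on the left, so the probability is 1/16 = 6.25%.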
https://afonsobandeira.wordpress.com/2010/08/02/bandlimited-functions-and-the-whittaker-shannon-kotelnikov-sampling-theorem/
# Bandlimited Functions and the Whittaker-Shannon-Kotel’nikov Sampling Theorem

During the next weeks I will be writing a series of posts about my pre-print with Daniel Abreu; it essentially has to do with Sampling Theory, Frames and Applied Harmonic Analysis. The first post of this series will be a more elementary one, and will be devoted to introducing bandlimited functions and presenting one of the classical results in this subject, the Shannon Sampling Theorem.

In Signal Processing it is usual to represent a signal by a function $f(t)$ depending on $t$ (usually considered to be time). It is often important to consider the same signal on the frequency side; this is achieved by the Fourier Transform of the signal, $\hat{f}(\xi) = \int_{\mathbb{R}} f(t) e^{-2\pi i \xi t}\, dt.$ Roughly speaking, the value of $\hat{f}(\xi)$ stands for how much the frequency $\xi$ is present in $f$. The Fourier Transform lies at the heart of Fourier Analysis, and is a mathematical object with several beautiful (and sometimes amazing) properties. Two of them are very important in what follows:

– The Plancherel Theorem, which essentially states that the Fourier Transform is an isometry in $L^2(\mathbb{R})$, meaning that $\|f\|_{L^2} = \|\hat{f}\|_{L^2}$, and

– the inversion formula, which essentially gives the relation $f(t) = \int_{\mathbb{R}} \hat{f}(\xi) e^{2\pi i \xi t}\, d\xi.$

In several applications it is reasonable to assume that a signal cannot have frequencies of arbitrarily big absolute value (a simple illustration of this is the fact that the human ear can only hear sounds whose frequency lies on a certain band). For this reason one is interested in studying functions whose frequency is supported on $[-W,W]$, for some $W>0$ (we will restrict ourselves to $W=\frac12$ to make the exposition cleaner, although all results below can be easily generalized to a general band space). This motivates the next definition.

Definition 1: Bandlimited functions. The space of bandlimited functions is the space of all $f \in L^2(\mathbb{R})$ such that its Fourier Transform is supported on $[-\frac12,\frac12]$, i.e. such that $\hat{f}(\xi) = 0$ for $|\xi| > \frac12$.

The space of bandlimited functions is also tightly connected to a space of entire functions by the Paley-Wiener Theorem.
Now we are ready to state and prove the Whittaker-Shannon-Kotel’nikov Sampling Theorem.

Theorem 1 (Whittaker-Shannon-Kotel’nikov): Let $f$ be a bandlimited function (see Definition 1). Then $f(t) = \sum_{n\in\mathbb{Z}} f(n)\,\mathrm{sinc}(t-n),$ where $\mathrm{sinc}(x) = \frac{\sin(\pi x)}{\pi x}$. Moreover $\|f\|_{L^2}^2 = \sum_{n\in\mathbb{Z}} |f(n)|^2.$

Proof: Let $f$ be given. By the Plancherel Theorem we have $\langle f,g\rangle = \langle \hat f,\hat g\rangle$. As $\{e^{2\pi i n \xi}\}_{n\in\mathbb{Z}}$ is an orthonormal basis of $L^2[-\frac12,\frac12]$ (Fourier Series are based on this fact) and $\hat f$ is supported on $[-\frac12,\frac12]$, we can write $\hat f(\xi) = \sum_{n\in\mathbb{Z}} c_n e^{2\pi i n \xi}\, \chi_{[-\frac12,\frac12]}(\xi)$ for some coefficients $c_n$, where $\chi_A$ stands for the characteristic function of the set $A$. Using the inversion formula and performing simple calculations we obtain $f(t) = \sum_{n\in\mathbb{Z}} c_n\, \mathrm{sinc}(t+n).$ Setting $t=-n$ we obtain $c_n = f(-n)$, giving (after reindexing $n \to -n$) the first equality of the Theorem. The second part of the theorem is obtained with a direct application of the Plancherel Theorem and the fact that $\{e^{2\pi i n\xi}\}_{n\in\mathbb{Z}}$ is an orthonormal basis of $L^2[-\frac12,\frac12]$.

This theorem shows that $\mathbb{Z}$ is a sequence where we can sample bandlimited functions, in the sense that, if we know the values of a bandlimited function at the integers, the function is uniquely determined and can be reconstructed by the formula in Theorem 1. This sampling rate is known as the Nyquist Rate. One question that naturally arises is whether there exists a sampling sequence “smaller” than $\mathbb{Z}$ (or, in other words, whether the Nyquist Rate is optimal). To properly ask this question one needs to define what “smaller” means for infinite sets, and one needs to define “sampling sequence”. This will be done in the next post, and an answer to this question, due to Landau, will be discussed in future posts as well.

As this is my first Math blog post I would much appreciate comments about it. Was it too elementary? Did I lose too much time on basic stuff? Was it too fast to follow? Too slow? Too long? Too short? Was something not clear enough? Answers to these questions will help me write better posts in the future.

## 7 thoughts on “Bandlimited Functions and the Whittaker-Shannon-Kotel’nikov Sampling Theorem”

1. I think your speed is good. Maybe in the future you will need/be able to write longer posts but I liked this one.
I already knew most of the mathematics presented, so I can’t give good answers to the other questions. Just one observation: I believe that not every function in $L^2(\mathbb{R})$ has its Fourier transform defined (not by the integral anyway). So in Definition 1, are you only considering functions for which the integral in the definition of the Fourier transform converges?

1. Joel, Thank you for your suggestion. About the definition of the Fourier Transform, to be rigorous one needs to define it for functions in $L^1(\mathbb{R})\cap L^2(\mathbb{R})$ by the integral (because the integral converges if $f\in L^1(\mathbb{R})$) and then, using the Plancherel theorem together with the fact that $L^1(\mathbb{R})\cap L^2(\mathbb{R})$ is dense in $L^2(\mathbb{R})$, one can extend it to a unitary operator on $L^2(\mathbb{R})$.

2. Thiago Pereira says: I really liked the question of whether a smaller sampling sequence exists. By assuming the function band limited in frequency, doesn’t it imply it is not limited in time? So by the hypothesis of the theorem we get a sampling rate, but we always obtain an infinite number of samples anyway. Looking forward to the next post.

1. Thiago, Thank you for your comment. That is a very good point. Roughly, the uncertainty principle (or one of the uncertainty principles) tells us that a function cannot be both concentrated in time and in frequency and, in particular, that if it is band-limited it has to be unlimited in time (these concentration ideas will play a vital role in posts to come). It is true that a sampling sequence has to consist of an infinite set of points, but even so we can talk about smaller sets from a density perspective. For example, one agrees that the set of the integers is (somehow) twice as big as the set of the even numbers. In a future post I will give a formal definition of density. I will write the next post during the weekend. Afonso

3.
El Macho says: Today I was reading Shannon’s mathematical theory of communication and his notation wasn’t as clear as yours. I definitely appreciate your elementary translation and thank you for your efforts.
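The sampling series from the post can be sanity-checked numerically. The Python sketch below assumes the normalization discussed above (band $[-\frac12,\frac12]$, samples at the integers) and uses `np.sinc`, which is NumPy's normalized sinc, sin(πx)/(πx):

```python
import numpy as np

# Illustration of the reconstruction formula for a function bandlimited
# to [-1/2, 1/2]: f(t) = sum_n f(n) * sinc(t - n).  We build f as a
# finite combination of integer-shifted sincs, so its integer samples
# are just the coefficients and the sampling series terminates exactly.
rng = np.random.default_rng(0)
shifts = np.arange(-5, 6)
coeffs = rng.standard_normal(len(shifts))

def f(t):
    return sum(c * np.sinc(t - n) for c, n in zip(coeffs, shifts))

t = 0.37  # an arbitrary non-integer point
samples = np.array([f(n) for n in shifts])     # equals coeffs exactly
recon = np.sum(samples * np.sinc(t - shifts))  # Shannon series
assert abs(recon - f(t)) < 1e-10
```

For a general bandlimited function the series is infinite and truncation introduces an error; the finite combination of shifted sincs is chosen here precisely so that all samples outside the window vanish and the reconstruction is exact.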
https://testbook.com/blog/interest-quiz-for-ssc-cgl-railways-rrb/
# Interest Quiz for SSC CGL & Railways RRB

Here is an interest quiz for upcoming exams like SSC CGL & Railways. This quiz contains important questions matching the exact pattern and syllabus of upcoming exams. Make sure you attempt today’s Quant Quiz for Upcoming Exams to check your preparation level. Interest Quiz for SSC CGL & Railways

Que. 1 A sum of money placed at compound interest doubles itself in 4 years. In how many years will it amount to four times itself? 1. 12 years 2. 13 years 3. 8 years 4. 16 years

Que. 2 A man invested 1/3 of his capital at 7%, ¼ at 8% and the remainder at 10%. If his annual income is Rs. 561, the capital is 1. Rs. 5400 2. Rs. 6000 3. Rs. 6600 4. Rs. 7200

Que. 3 A certain sum invested at 4% per annum compound interest, compounded half-yearly, amounts to Rs 7803 at the end of one year. The sum is? 1. 7000 2. 7200 3. 7500 4. 7700

Que. 4 At what rate per cent per annum will a sum of Rs. 1000 amount to Rs. 1,102.50 in 2 years at compound interest? 1. 5 2. 5.5 3. 6 4. 6.5

Que. 5 Rs. 1000 amounts to Rs. 1420 in 3 years at simple interest. If the interest rate is increased by 3% of itself, it would amount to? 1. Rs. 1,182.8 2. Rs. 1,432.6 3. Rs. 1,056.6 4. Rs. 1,112

Que. 6 A person takes a loan of Rs. 10,000 partly from a bank at 8% p.a. and the remaining from another bank at 10% p.a. He pays a total interest of Rs. 950 per annum. The amount of loan taken from the first bank (in Rs.) is 1. 2500 2. 5200 3. 2050 4. 5020

Que. 7 If Rs. 5,000 becomes Rs. 5,700 in a year’s time, what will Rs. 7,000 become at the end of 5 years at the same rate of simple interest? 1. Rs. 10,500 2. Rs. 11,900 3. Rs. 12,700 4. Rs. 7,700

Que. 8 A bank gives compound interest on deposits at the rate of 5% for the first year, 6% for the second year and 10% for the third year. If a deposit amounts to Rs. 12,243 at the end of the third year, then the initial deposit (principal) was 1. Rs. 11500 2. Rs. 10000 3. Rs. 10500 4. Rs. 11000

Que.
9 In how many years will a sum of money double itself at 6.25% simple interest per annum? 1. 12.5 2. 16 3. 8 4. $$10\frac{2}{3}$$

Que. 10 A man gave 50% of his savings of Rs. 84,100 to his wife and divided the remaining sum among his two sons A and B, of 15 and 13 years of age respectively. He divided it in such a way that each of his sons, when they attain the age of 18 years, would receive the same amount at 5% compound interest per annum. The share of B was 1. Rs. 20,000 2. Rs. 20,050 3. Rs. 22,000 4. Rs. 22,050
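Several of the questions above can be spot-checked with a few lines of Python using the standard simple- and compound-interest formulas (the sketch covers Q1, Q3, Q5 and Q8; amounts are in rupees):

```python
# Q1: doubles in 4 years at compound interest => (1+r)^4 = 2,
# hence (1+r)^8 = 4: the sum quadruples in 8 years.
r = 2 ** (1 / 4) - 1
assert abs((1 + r) ** 8 - 4) < 1e-12

# Q3: 4% p.a. compounded half-yearly for one year = 2% per half-year.
assert round(7500 * 1.02 ** 2) == 7803

# Q5: 1000 -> 1420 in 3 years SI means 14% p.a.; raising the rate by
# 3% of itself gives 14.42%, so the amount is 1000*(1 + 3*0.1442).
assert round(1000 * (1 + 3 * 0.14 * 1.03), 1) == 1432.6

# Q8: peel off the three yearly compound factors from the maturity amount.
assert round(12243 / (1.05 * 1.06 * 1.10)) == 10000
print("Q1: 8 years, Q3: Rs. 7500, Q5: Rs. 1432.6, Q8: Rs. 10000")
```

Each assertion reproduces the corresponding listed option, which is a quick way to confirm an answer key after an attempt.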
http://mathhelpforum.com/calculus/51004-solve-basic-trigonometric-equation-complex-plane.html
# Math Help - solve basic trigonometric equation in complex plane

1. ## solve basic trigonometric equation in complex plane

$\sin z = \sin c$, $z = x + iy$, $c = a + ib$, $a,b,x,y \in \mathbb{R}$. Separating the real and imaginary parts, I have the following system to solve: $\left\{ \begin{gathered} \sin x\cosh y = \sin a\cosh b \\ \cos x\sinh y = \cos a\sinh b \end{gathered} \right.$ But I don't see how I can solve this. Any idea?

2. You need to know this identity. $\sin (x + yi) = \sin (x)\cosh (y) + i\left[ {\cos (x)\sinh (y)} \right]$

3. I do know this formula; this is how I reached the system I need to solve. This is where the problem begins, at least for me. civodul

4. Originally Posted by civodul I do know this formula; this is how I reached the system I need to solve. Well in that case, I would square both; add them together; use identities to eliminate one variable.

5. If $\sin(z)=\sin(c)$ then why can't we just write: $z=-i\log\bigg(iw+\sqrt{1-w^2}\bigg),\quad w=\sin(c)$

6. Why do you not stay with your original question? Of course there are many ways to solve this question! Most mathematicians would have gone for $\sin (z) = \frac{{e^{iz} - e^{ - iz} }}{{2i}}$.

7. Hi Plato, Shawsend,
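The closed form proposed in reply #5 is easy to test numerically with Python's `cmath`; this is only a spot check of one branch, not a derivation:

```python
import cmath

# If w = sin(c), then z = -i log(i w + sqrt(1 - w^2)) satisfies
# sin(z) = w: writing u = i w + sqrt(1 - w^2), one has e^{iz} = u and
# u - 1/u = 2 i w, so sin(z) = (u - 1/u)/(2i) = w on either sqrt branch.
c = 0.7 + 0.3j
w = cmath.sin(c)
z = -1j * cmath.log(1j * w + cmath.sqrt(1 - w * w))
assert abs(cmath.sin(z) - w) < 1e-12

# The "other" family of solutions of sin(z) = sin(c) also checks out:
assert abs(cmath.sin(cmath.pi - c) - w) < 1e-12
```

The logarithm picks out one solution; the full solution set of sin z = sin c is z = c + 2kπ together with z = π − c + 2kπ for integer k.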
https://www.physicsforums.com/threads/position-operator.883918/
# B Position operator

1. Sep 2, 2016

### Naman Jain Kota

Well, I am a noobie to quantum physics so I may be totally incorrect; please bear with me. I had a question: how is the position operator defined mathematically? I was reading the momentum-position commutator from http://ocw.mit.edu/courses/physics/...pring-2013/lecture-notes/MIT8_04S13_Lec05.pdf (page 2 of the pdf). They have used: position operator = eigenvalue (i.e. position itself) times wavefunction. But I doubt that the relation will be valid only for a delta wavefunction (parallel to how the momentum relation is valid in the case of e^{ix}. I understood it as: momentum is well defined only in that case, so similarly position will be defined clearly only for a delta function.) So am I correct? Also, please point out pitfalls in my understanding.

2. Sep 2, 2016

### vanhees71

In the wave-mechanics formulation (the position representation of Hilbert space) you associate with the Hilbert space vector a function $\psi(\vec{x})$, which is square integrable, i.e., $$\int_{\mathbb{R}^3} \mathrm{d}^3 \vec{x} |\psi(\vec{x})|^2$$ exists. Such functions build a Hilbert space (let's leave out the mathematical subtleties here), the space of square integrable functions. The scalar product is defined by $$\langle \psi_1|\psi_2 \rangle=\int_{\mathbb{R}^3} \mathrm{d}^3 \vec{x} \psi_1^*(\vec{x}) \psi_2(\vec{x}),$$ which always exists for $\psi_1$ and $\psi_2$ being square integrable.
Now you can easily check that the operators $\hat{x}_i$ and $\hat{p}_i$, defined by $$\hat{x}_i \psi(\vec{x})=x_i \psi(\vec{x}), \quad \hat{p}_i \psi(\vec{x})=\frac{\hbar}{\mathrm{i}} \frac{\partial}{\partial x_i} \psi(\vec{x})$$ obey the commutator relations for position and momentum, $$[\hat{x}_i,\hat{x}_j]=0, \quad [\hat{p}_i,\hat{p}_j]=0, \quad [\hat{x}_i,\hat{p}_j]=\mathrm{i} \hbar \delta_{ij}.$$ Now the eigenvalue problem for such operators is a bit more complicated than for operators in a finite-dimensional vector space. Take the momentum operator as an example. The eigenvalue equation reads $$\hat{p}_j u_{\vec{p}}(\vec{x})=p_j u_{\vec{p}}(\vec{x}).$$ You can solve this equation, using the definition of the momentum operator, easily to be $$u_{\vec{p}}(\vec{x})=N \exp \left (\frac{\mathrm{i} \vec{p} \cdot \vec{x}}{\hbar} \right), \quad N=\text{const}.$$ But now you see that for any $\vec{p} \in \mathbb{R}^3$ this is not a square-integrable function, i.e., it's not in the Hilbert space! Rather it's a distribution (or generalized function). You can formally evaluate a scalar product of two such generalized eigenfunctions to give $$\langle u_{\vec{p}}|u_{\vec{p}'} \rangle = |N|^2 (2 \pi)^3 \hbar^3\delta^{(3)}(\vec{p}-\vec{p}').$$ So it's convenient to define $$N=\frac{1}{(2 \pi \hbar)^{3/2} }.$$ A similar argument leads to the "position eigenvectors". The position eigenvector to eigenvalue $\vec{x}_0$ must be $$u_{\vec{x}_0}(\vec{x})=\delta^{(3)}(\vec{x}-\vec{x}_0).$$ So again it's a distribution. You cannot even square it! Nevertheless you can use the generalized eigenvectors for very important calculations. E.g., if you have given a particle to be in a state represented by the (square integrable!)
wave function $\psi$ and want to know the probability distribution for momentum, you just evaluate formally $$\tilde{\psi}(\vec{p})=\langle u_{\vec{p}}|\psi \rangle=\int_{\mathbb{R}^3} \mathrm{d}^3 \vec{x} u_{\vec{p}}^*(\vec{x}) \psi(\vec{x}),$$ which is just the Fourier transform. If $\psi$ is normalized to 1, you get the momentum-probability distribution via Born's rule as $$P(\vec{p})=|\tilde{\psi}(\vec{p})|^2.$$
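The one-dimensional canonical commutation relation can be illustrated numerically. The Python sketch below (with ħ set to 1, and a spectral FFT derivative standing in for the momentum operator on a periodic grid) verifies [x̂, p̂]ψ = iħψ for a Gaussian wave function:

```python
import numpy as np

# Check [x, p] psi = i*hbar*psi (hbar = 1) on a grid.  The Gaussian
# decays so fast that periodic boundary effects are negligible and the
# spectral derivative is accurate to machine precision.
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

def p_op(f):
    # p = (hbar/i) d/dx, with d/dx computed via FFT: multiply by i*k.
    return -1j * np.fft.ifft(1j * k * np.fft.fft(f))

psi = np.exp(-x**2)
comm = x * p_op(psi) - p_op(x * psi)   # (x p - p x) psi
err = np.max(np.abs(comm - 1j * psi))  # should vanish up to round-off
assert err < 1e-8
```

Analytically, x(−iψ′) + i(xψ)′ = iψ, so the residual measures only discretization and round-off error.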
https://www.physicsforums.com/threads/kline-calculus-problems-simple-derivatives-and-marginal-cost.593627/
# Kline Calculus Problems - Simple Derivatives and Marginal Cost

1. Apr 5, 2012

### ghostskwid

I had questions on 2 problems in the text:

1. The total cost C of producing x units of some item is a function of x. Economists use the term marginal cost for the rate of change of C with respect to x. Suppose that: C = 5x^2 + 15x + 200. What is the marginal cost when x = 15? Would this marginal cost be the cost of the 16th unit? (((I understand how to take the derivative and find dC/dx. However, I am unsure as to why this represents the cost of the 16th unit. Also, can someone explain to me in simple terms what the derivative of C represents?)))

2. Using the definition of marginal cost in the preceding exercise, suppose that the cost C of producing x units of a toy is C = 3x^2 - 4x + 5. What is the marginal cost at any value of x? Would the marginal cost necessarily increase with x in any realistic situation? (((Why doesn't the marginal cost always increase with x in any realistic situation?))) Thanks!

2. Apr 5, 2012

### chiro

Hey ghostskwid and welcome to the forums. The derivative means the instantaneous rate of change of something with respect to another. In this case it represents how C changes with x at that point. Think of looking at the slope of the function C between a point x and x + dx, where dx gets smaller and smaller and approaches zero but isn't zero! It's a weird thing to understand but that's the best way to describe it. Basically in this context if the slope is increasing then the cost is increasing for every x, which means there will be a relative increase in cost to produce more stuff, and if it decreases then it will cost relatively less.
The thing that businesses want to do is create more things at the cheapest possible rate which means that if we have any dC/dx where it is negative then this means that the businesses can create or produce more things without having to spend as much for each new piece of stuff (in other words it's less per unit to produce more stuff than it is to produce the existing stuff). Think about a factory creates say a lot of cars or something on an assembly line. They have to pay for running the assembly line, wages, and all that stuff as well as for the materials but once they produce enough to cover things like wages and operating the factory, then it won't cost them as much to produce anything more and this is what businesses with factories want because they will sell their stuff at the same price usually which means they make a lot more profit when they make more stuff if the dC/dx is negative. By finding the turning point where dC/dx is zero at a minimum, this says for the business what's the best amount to produce to maximize profit in one sense. 3. Apr 5, 2012 ### ghostskwid Thanks that helps. Why is the marginal cost at 15 the actual cost of unit 16? Also in the second problem could the marginal cost ever be negative? 4. Apr 5, 2012 ### DonAntonio It seems to be that (1) asks the following: the marginal cost at x = 15 is $C'(15)=10\cdot 15+15=165$ . On the other hand, the cost of the 16th product seems to be C(16)-C(15) = the cost of making 16 items minus the cost of making 15 items, and this gives 170, so no. DonAntonio Disclaimer: the person that wrote the above answer is a pure mathematician and thus his messing with mathematicial economics and/or financial stuff must be taken with due care. 5. Apr 5, 2012 ### chiro I'm not a pure mathematician: my background is in computer programming and I will graduate this year with a double major in statistics and applied mathematics. 6. 
Apr 5, 2012

### HallsofIvy

Staff Emeritus

chiro, I believe Don Antonio was referring to himself.

ghostskwid, yes, the marginal cost of the 16th item is just the cost of that item. Since your function, C(x), gives the total cost of manufacturing x items, the marginal cost of the 16th item is C(16) - C(15). Since you refer to both "simple derivatives" and "marginal cost" in the title of this thread it might be good to point out that the marginal cost of the "x"th item is $$C(x+ 1)- C(x)= \lim_{h\to 1}\frac{C(x+h)- C(x)}{h}$$ while the derivative is $$\lim_{h\to 0}\frac{C(x+h)- C(x)}{h}$$ Of course, the "h" going to 1 in the denominator raises much less theoretical issues than it going to 0!

7. Apr 5, 2012

### chiro

Thank you, HallsofIvy, for that. Hopefully the OP will reply so that everything gets cleared up.

8. Apr 5, 2012

### ghostskwid

I get 165 when calculating the marginal cost at x = 15 and 170 when calculating the cost of the 16th product ((C(16) - C(15))). Kline reports the answer as yes it will be. Could you explain your tex block? How do they differ? I thought the marginal cost was simply the derivative of the function. Thanks
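The distinction drawn in the thread — the derivative versus the actual cost of the next unit — can be tabulated in a couple of lines for the cost function from the original post:

```python
# Marginal cost vs. cost of the 16th unit for C(x) = 5x^2 + 15x + 200.
def C(x):
    return 5 * x**2 + 15 * x + 200

def dC(x):
    # derivative C'(x) = 10x + 15
    return 10 * x + 15

marginal_at_15 = dC(15)          # 165: instantaneous rate at x = 15
unit_16 = C(16) - C(15)          # 170: actual cost of the 16th unit
print(marginal_at_15, unit_16)   # close, but not equal
```

The two numbers differ because the derivative is a limit of difference quotients as h → 0, whereas the cost of one more unit is the difference quotient at h = 1; for a quadratic cost curve the gap is exactly half the second-derivative, here 5.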
http://scientificlib.com/en/Mathematics/LX/BaerRing.html
# Baer ring

In abstract algebra and functional analysis, Baer rings, Baer *-rings, Rickart rings, Rickart *-rings, and AW* algebras are various attempts to give an algebraic analogue of von Neumann algebras, using axioms about annihilators of various sets. Any von Neumann algebra is a Baer *-ring, and much of the theory of projections in von Neumann algebras can be extended to all Baer *-rings. For example, Baer *-rings can be divided into types I, II, and III in the same way as von Neumann algebras. In the literature, left Rickart rings have also been termed left PP-rings. ("Principal implies projective": see definitions below.)

Definitions

• An idempotent in a ring is an element e with $$e^2 = e$$.
• The left annihilator of a set $$X \subseteq R$$ is $$\{r\in R\mid rX=\{0\}\}$$.
• A (left) Rickart ring is a ring satisfying any of the following conditions:
1. the left annihilator of any single element of R is generated (as a left ideal) by an idempotent element.
2. (For unital rings) the left annihilator of any element is a direct summand of R.
3. All principal left ideals (ideals of the form Rx) are projective R-modules.[1]
• A Baer ring has the following definitions:
1. The left annihilator of any subset of R is generated (as a left ideal) by an idempotent element.
2. (For unital rings) The left annihilator of any subset of R is a direct summand of R.[2]

For unital rings, replacing all occurrences of 'left' with 'right' yields an equivalent definition; that is to say, the definition is left-right symmetric.[3] In operator theory, the definitions are strengthened slightly by requiring the ring R to have an involution $$*:R\rightarrow R$$. Since this makes R isomorphic to its opposite ring R^op, the definition of Rickart *-ring is left-right symmetric.

• A projection in a *-ring is an idempotent p that is self-adjoint (p* = p).
• A Rickart *-ring is a *-ring such that the left annihilator of any element is generated (as a left ideal) by a projection.
• A Baer *-ring is a *-ring such that the left annihilator of any subset is generated (as a left ideal) by a projection.
• An AW* algebra, introduced by Kaplansky (1951), is a C* algebra that is also a Baer *-ring.

Examples

• Since the principal left ideals of a left hereditary ring or left semihereditary ring are projective, it is clear that both types are left Rickart rings. This includes von Neumann regular rings, which are left and right semihereditary. If a von Neumann regular ring R is also right or left self-injective, then R is Baer.
• Any semisimple ring is Baer, since all left and right ideals are summands in R, including the annihilators.
• Any domain is Baer, since all annihilators are $$\{0\}$$ except for the annihilator of 0, which is R, and both $$\{0\}$$ and R are summands of R.
• The ring of bounded linear operators on a Hilbert space is a Baer ring, and it is also a Baer *-ring with the involution * given by the adjoint.
• von Neumann algebras are examples of all the different sorts of ring above.

Properties

The projections in a Rickart *-ring form a lattice, which is complete if the ring is a Baer *-ring.

Notes

^ Rickart rings are named after Rickart (1946), who studied a similar property in operator algebras. This "principal implies projective" condition is the reason Rickart rings are sometimes called PP-rings. (Lam 1999)
^ This condition was studied by Reinhold Baer (1952).
^ T. Y. Lam (1999), Lectures on Modules and Rings, ISBN 0-387-98428-3, p. 260.

References

Baer, Reinhold (1952), Linear Algebra and Projective Geometry, Boston, MA: Academic Press, ISBN 978-0-486-44565-6, MR0052795
Berberian, Sterling K. (1972), Baer *-rings, Die Grundlehren der mathematischen Wissenschaften 195, Berlin, New York: Springer-Verlag, ISBN 978-3-540-05751-2, MR0429975
Kaplansky, Irving (1951), "Projections in Banach algebras", Annals of Mathematics, Second Series 53 (2): 235–249, doi:10.2307/1969540, ISSN 0003-486X, JSTOR 1969540, MR0042067
Kaplansky, I. (1968), Rings of Operators, New York: W. A. Benjamin, Inc.
Rickart, C. E. (1946), "Banach algebras with an adjoint operation", Annals of Mathematics, Second Series 47 (3): 528–550, doi:10.2307/1969091, JSTOR 1969091, MR0017474
L.A. Skornyakov (2001), "Regular ring (in the sense of von Neumann)", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1556080104
L.A. Skornyakov (2001), "Rickart ring", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1556080104
J.D.M. Wright (2001), "AW* algebra", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1556080104
https://jira.lsstcorp.org/browse/DM-7541?focusedCommentId=79207&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
# DLP-579 may be at the wrong level in PMCS

#### Details

• Type: Bug
• Status: Won't Fix
• Resolution: Done
• Fix Version/s: None
• Team: System Management

#### Description

It looks like DLP-579 may be at the wrong level in PMCS, one level too high at 02C.09 instead of a third-level branch. Unlike the case for DM-7540, though, I think the problem here is that there is currently no suitable third-level branch of 02C.09 for this milestone. Perhaps that means that something is missing? Or is this really an 02C.10 milestone?

#### Attachments

1. dlp-579-PMCS.png 98 kB

#### Activity

Wil O'Mullane added a comment - I can not find DLP-579 in Primavera. Are you sure this should be assigned to me?

Gregory Dubois-Felsmann added a comment (edited) - Here it is, in the April 2017 baseline. (See uploaded screen shot.)

Gregory Dubois-Felsmann added a comment - I imagine Tim just transferred it to you (Wil) from Jacek by default, as Jacek's successor.

Gregory Dubois-Felsmann added a comment - It seems like it really should be Frossie's to clean up, as it seems more naturally related to 02C.10, but I thought (back then) that Jacek should be aware of any milestone being moved from one 2nd-level WBS to another.

Wil O'Mullane added a comment - OK, I had a filter in PMCS; now I see it - it is owned by Frossie in there. We will address all of these milestones next week, so this is a bit of an outlier really. Many milestones are wrong. This particular one is completed (which is why I did not see it in PMCS), so I see nothing to do here anymore.

Gregory Dubois-Felsmann added a comment - Agreed.

#### People

Assignee: Wil O'Mullane
Reporter: Gregory Dubois-Felsmann
Watchers: Gregory Dubois-Felsmann, Wil O'Mullane
https://danilafe.com/post/3
This is the third post in a series I'm writing about Chip-8 emulation. If you want to see the first one, head here.

In the previous part of this tutorial, we created a type to represent a basic Chip-8 machine. However, we've done nothing to make it behave like one! Let's start working on that.

### Initializing

If you're writing your emulator in C / C++, simply declaring a variable such as stackp (the stack pointer) will not give you a clean value. Memory for these variables is allocated and never cleaned up, so they can (and do) contain gibberish. For some parts of our program, it's necessary to initialize everything we have to 0 or another meaningful value. A prime example is that of the timers, especially the sound timer. The sound timer beeps until its value is 0, and if it starts with a value of a few million, the game will make a sound without having asked for any. The stack pointer also needs to be set to 0. As it's proper style to initialize everything, we'll do just that.

```c
chip->pc = 0x200;
chip->i = 0;
chip->stackp = 0;
chip->delay_timer = 0;
chip->sound_timer = 0;
for(int i = 0; i < 16; i++) chip->v[i] = 0;
for(int i = 0; i < 4096; i++) chip->memory[i] = 0;
for(int i = 0; i < 16; i++) chip->stack[i] = 0;
for(int i = 0; i < (64 * 32); i++) chip->display[i] = 0;
```

I set the program counter to 0x200 because, according to the specification on the Wiki page for Chip-8, most programs written for the original system begin at memory location 512 (0x200).

### The First Steps

We are now ready to start stepping the emulator. I'm going to omit loading a file into memory and assume that one is already loaded. However, I will remind the reader to load programs into memory starting at address 0x200 when they write their file loading code. According to the Chip-8 Wiki, CHIP-8 has 35 opcodes, which are all two bytes long and stored big-endian.
So, the first thing we want to do in our step code is to combine the next two bytes at the program counter into a single short:

```c
unsigned short int opcode = (chip->memory[chip->pc] << 8) | (chip->memory[chip->pc + 1]);
```

In this piece of code, we take the byte at the program counter and shift it to the left by 8 bits (the size of a byte). We then use binary OR to combine it with the next byte.

Now that we have an opcode, we need to figure out how to decode it. Most opcodes are distinguished by the first hexadecimal digit that makes them up (for example, the F in 0xFABC), so let's first isolate that digit. We can do that by using the binary AND operation on the opcode, with the second operand being 0xF000:

```c
unsigned short int head = opcode & 0xf000;
```

If we had an opcode with a value such as 0x1234, running 0x1234 & 0xF000 will give us 0x1000. This is exactly what we need to tell the opcodes apart. We can now start implementing the instructions! The first instruction listed on the Wiki page is:

0x00E0: Clears the screen.

So, in our step code, we need to check if the opcode starts with a 0. If it does, then the whole head variable will be 0.

```c
if(head == 0) { ... }
```

Next, though, we have a problem. There are two codes on the Wiki page that start with 0:

0x00E0: Clears the screen.
0x00EE: Returns from a subroutine.

So now, we need to check that the instruction ends with 0xE0 and not 0xEE.

```c
if((opcode & 0x00ff) == 0xe0) { ... }
```

Now, we can just clear the screen the way we did when we initialized:

```c
for(int i = 0; i < (64 * 32); i++) chip->display[i] = 0;
```

All together, our code ends up being:

```c
// Get the opcode
unsigned short int opcode = (chip->memory[chip->pc] << 8) | (chip->memory[chip->pc + 1]);

// Decode opcode
```
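Putting this section's pieces together, here is a self-contained sketch of the fetch-and-decode step so far. The struct definition is a minimal stand-in for the type built in the previous post (only the fields this step touches), and wrapping the logic in a `step` function is my own framing for testing, not the post's final code:

```c
#include <string.h>

/* Minimal stand-in for the chip8 struct from the previous post;
 * only the fields this step touches are included. */
struct chip8 {
    unsigned char memory[4096];
    unsigned char display[64 * 32];
    unsigned short pc;
};

/* Fetch the big-endian opcode at pc, handle 0x00E0 (clear screen),
 * advance pc past the two-byte instruction, and return the opcode. */
unsigned short step(struct chip8 *chip) {
    unsigned short opcode = (chip->memory[chip->pc] << 8)
                          | chip->memory[chip->pc + 1];
    unsigned short head = opcode & 0xf000;
    if (head == 0x0000 && (opcode & 0x00ff) == 0x00e0) {
        memset(chip->display, 0, sizeof chip->display); /* clear the screen */
    }
    chip->pc += 2; /* every Chip-8 instruction is two bytes */
    return opcode;
}
```

The `pc += 2` at the end is something the post hasn't discussed yet, but without advancing the program counter the emulator would fetch the same opcode forever.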
https://www.varsitytutors.com/hotmath/hotmath_help/topics/rotations.html
# Rotations

A rotation is a transformation in a plane that turns every point of a preimage through a specified angle and direction about a fixed point. The fixed point is called the center of rotation. The amount of rotation is called the angle of rotation, and it is measured in degrees. Use a protractor to measure the specified angle counterclockwise.

Some simple rotations can be performed easily in the coordinate plane using the rules below.

### Rotation by $90°$ about the origin:

A rotation by $90°$ about the origin is shown. The rule for a rotation by $90°$ about the origin is $\left(x,y\right)\to \left(-y,x\right)$ .

### Rotation by $180°$ about the origin:

A rotation by $180°$ about the origin is shown. The rule for a rotation by $180°$ about the origin is $\left(x,y\right)\to \left(-x,-y\right)$ .

### Rotation by $270°$ about the origin:

A rotation by $270°$ about the origin is shown. The rule for a rotation by $270°$ about the origin is $\left(x,y\right)\to \left(y,-x\right)$ .
http://www.science.gov/topicpages/q/quantum+recoil+effects.html
Note: This page contains sample records for the topic quantum recoil effects from Science.gov. While these samples are representative of the content of Science.gov, they are not comprehensive nor are they the most current set. We encourage you to perform a real-time search of Science.gov to obtain the most current and comprehensive results. Last update: August 15, 2014. 1 The free electron laser and collective atomic recoil laser (CARL) are examples of collective recoil lasing, where exponential amplification of a radiation field occurs simultaneously with self-bunching of an ensemble of particles (electrons in the case of the FEL and atoms in the case of the CARL). In this paper, we discuss quantum and propagation effects using a model where the particle dynamics are described quantum-mechanically in terms of a matter-wave field, which evolves self-consistently with the radiation field. The model shows that the scattered radiation evolves superradiantly both in the case where the particle ensemble is short compared to the cooperation length of the system, and where the ensemble is long compared to the cooperation length. In both short and long pulse cases there exist a classical and quantum regime of superradiant emission. For short samples in both quantum and classical regimes the superradiant pulse has a low peak intensity and is said to exhibit 'weak' superradiance. For long pulses in both quantum and classical regimes of evolution, the dynamics at the rear edge of the sample is dominated by propagation. This produces a 'strong' superradiant pulse with much higher peak intensity than that predicted by 'mean-field' or 'steady-state' models in which propagation effects are neglected. Bonifacio, R.; Piovella, N.; Robb, G. R. M.; Cola, M. M. 2005-08-01 2 SciTech Connect It is argued that the inclusion of the Bohm potential in quantum fluid equations is equivalent to inclusion of a nonrelativistic form of the quantum recoil in plasma kinetic theory. 
The Bohm term is incorrect when applied to waves with phase speed greater than the speed of light. Melrose, D. B.; Mushtaq, A. [School of Physics, University of Sydney, Sydney, New South Wales 2006 (Australia)] 2009-09-15 3 Free Electron Laser (FEL) and Collective Atomic Recoil Laser (CARL) are described by the same model of classical equations for properly defined scaled variables. These equations are extended to the quantum domain by describing the particle's motion with a Schrödinger equation coupled to a self-consistent radiation field. The model depends on a single collective parameter ρ̄, which represents the maximum number of photons emitted per particle. We demonstrate that the classical model is recovered in the limit ρ̄ ≫ 1, in which the Wigner function associated with the Schrödinger equation obeys the classical Vlasov equation. On the contrary, for ρ̄ ≤ 1, a new quantum regime is obtained in which both FELs and CARLs behave as a two-state system coupled to the self-consistent radiation field and described by Maxwell-Bloch equations. Bonifacio, R.; Cola, M. M.; Piovella, N.; Robb, G. R. M. 2005-01-01 4 We extend the semiclassical model of the collective atomic recoil laser (CARL) to include the quantum mechanical description of the center-of-mass motion of the atoms in a Bose-Einstein condensate (BEC). We show that when the average atomic momentum is less than the recoil momentum ℏq, the CARL equations reduce to the Maxwell-Bloch equations for two momentum levels. In the conservative regime (no radiation losses), the quantum model depends on a single collective parameter, ρ, that can be interpreted as the average number of photons scattered per atom in the classical limit. When ρ ≫ 1, the semiclassical CARL regime is recovered, with many momentum levels populated at saturation. On the contrary, when ρ ≤ 1, the average momentum oscillates between zero and ℏq, and a periodic train of 2π hyperbolic secant pulses is emitted.
In the dissipative regime (large radiation losses) and in a suitable quantum limit, a sequential superfluorescence scattering occurs, in which after each process the atoms emit a π hyperbolic secant pulse and populate a lower momentum state. These results describe the regular arrangement of the momentum pattern observed in recent experiments of superradiant Rayleigh scattering from a BEC. Piovella, N.; Gatelli, M.; Bonifacio, R. 2001-07-01 5 We formulate a wave-atom-optics theory of the collective atomic recoil laser (CARL) where the atomic center-of-mass motion is treated quantum mechanically. By comparing the predictions of this theory with those of the ray-atom-optics theory, which treats the center-of-mass atomic motion classically, we show that for the case of a far off-resonant pump laser the ray-optics model fails to predict the linear response of the CARL when the temperature is of the order of the recoil temperature or less. This is due to the fact that in this temperature regime one can no longer ignore the effects of matter-wave diffraction on the atomic center-of-mass motion. Moore, M. G.; Meystre, P. 1998-10-01 6 SciTech Connect We attempt to resolve a recent dispute regarding the size, as well as the proper formulation, of recoil corrections to baryon magnetic moments in a bag model. It is demonstrated that the overall center-of-the-system (OCS) motion, when factored out properly to yield the momentum-conservation delta-functions, cannot give rise to additional and sizable recoil corrections as addressed by Betz and Goldflam and independently by Guichon. Thus, the only contribution due to baryon recoil comes from the spinor rotation of the constituent quarks. Gattone, A.O.; Hwang, W.P. 1984-11-15 7 PubMed Recoil effects in valence band X-ray photoelectron spectroscopy (XPS) are studied for both α,β,β-trifluorostyrene and styrene molecular crystal systems.
The gradual changes of XPS spectra excited by several photon energies are theoretically investigated within the tight-binding approximation and the harmonic approximation of lattice vibrations, and have been explained in terms of not only atomic mass but also atomic orbital (AO) population. The recoil effect of valence band photoemission strongly depends on the population and partial photoionization cross section (PICS) of AOs as well as the masses of the composite atoms. In α,β,β-trifluorostyrene, F 2p dominant bands show a recoil shift close to the free F atom recoil shift, and C 2s dominant bands show one close to the free C atom recoil shift, whereas the mixed bands of C and F give rise to peak asymmetries due to their different recoil shifts. For these systems, the hydrogen contribution is negligibly small, which is in contrast to our previous results for crystals composed of small organic molecules. We also discuss some potential uses of the recoil shifts for these systems. PMID:23441983 Shang, Ming-Hui; Fujikawa, Takashi; Ueno, Nobuo 2013-04-01 8 A number of geologically important chronometers are affected by, or owe their utility to, the "recoil effect". This effect describes the physical displacement of a nuclide due to energetic nuclear processes such as radioactive alpha decay (as in the case of various parent-daughter pairs in the uranium-series decay chains, and Sm-Nd), as well as neutron irradiation (in the case of the methodology for the 40Ar/39Ar dating method). The broad range of affected geochronometers means that the recoil effect can impact a wide range of dating method applications in the geosciences, including but not limited to: Earth surface processes, paleoclimate, volcanic processes, and cosmochemistry and planetary evolution. In particular, the recoil effect can have a notable impact on the use of fine grains (silt- and clay-sized particles) for geochronometric dating purposes.
This is because recoil-induced loss of a nuclide from the surfaces of a grain can create an isotopically-depleted outer rind, and for small grains, this depleted rind can be volumetrically significant. When this recoil loss is measurable and occurs in a known time-dependent fashion, it can usefully serve as the basis for chronometers (such as the U-series comminution age method); in other cases recoil loss from fine particles creates an unwanted deviation from expected isotope values (such as for the Ar-Ar method). To improve both the accuracy and precision of ages inferred from geochronometric systems that involve the recoil of a key nuclide from small domains, it is necessary to quantify the magnitude of the recoil loss of that particular nuclide. It is also necessary to quantitatively describe the effect of geological processes that can alter the outer surface of grains, and hence the isotopically-depleted rind. Here we present a new mathematical and numerical model that includes two main features that enable enhanced accuracy and precision of ages determined from geochronometers. Since the surface area of the dated grain is a major control on the magnitude of recoil loss, the first feature is the ability to calculate recoil effects on isotopic compositions for realistic, complex grain shapes and surface roughnesses. This is useful because natural grains may have irregular shapes that do not conform to simple geometric descriptions. Perhaps more importantly, the surface area over which recoiled nuclides are lost can be significantly underestimated when grain surface roughness is not accounted for, since the recoil distances can be of similar characteristic lengthscales to surface roughness features. The second key feature is the ability to incorporate dynamical geologic processes affecting grain surfaces in natural settings, such as dissolution and crystallization. 
We describe the model and its main components, and point out implications for the geologically-relevant chronometers mentioned above. Lee, V. E.; Huber, C. 2012-12-01 9 SciTech Connect Relativistic nuclear recoil effects are studied for antiprotonic and muonic atoms. The generalization of the Breit-Pauli Hamiltonian including vacuum polarization is presented. Previous treatments are corrected, and the result for the 2S(1/2)-2P(1/2) splitting in muonic hydrogen is updated. Veitia, Andrzej; Pachucki, Krzysztof [Institute of Theoretical Physics, Warsaw University, Hoża 69, 00-681 Warsaw (Poland)] 2004-04-01 10 A collective atomic recoil laser (CARL) realized with a Bose-Einstein condensate offers the possibility to investigate new effects in the coherent interaction between optical and matter waves. This paper discusses some aspects of the nonlinear evolution of scattered radiation and the matter-wave field in the high-Q cavity and superradiant CARL regimes. Piovella, N.; Cola, M.; Bonifacio, R. 2004-06-01 11 We extend the collective atomic recoil lasing (CARL) model to include the effects of friction and diffusion forces acting on the atoms due to the presence of optical molasses fields. The results from this model are consistent with those from a recent experiment by Kruse et al. [Phys. Rev. Lett. 91, 183601 (2003)]. In particular, we obtain a threshold condition above which collective backscattering occurs. Using a nonlinear analysis we show that the backscattered field and the bunching evolve to a steady state, in contrast to the nonstationary behavior of the standard CARL model. For a proper choice of the parameters, this steady state can be superfluorescent. Robb, G. R.; Piovella, N.; Ferraro, A.; Bonifacio, R.; Courteille, Ph. W.; Zimmermann, C. 2004-04-01 12 We present a theoretical investigation of propagation effects in a collective atomic recoil laser (CARL) operating in the FEL limit.
We consider the cases where the system evolves while in free space and while enclosed in a ring cavity. In the case where no cavity is present, we show that the scattered radiation consists of soliton-like superfluorescent pulses. In the case of a 'good' cavity we arrive analytically at a condition to neglect propagation effects. This condition implies that in order to use the so-called mean field approximation, the condition (ΛL)/l_c^(3/2) → 0, T → 0 must be satisfied with (T·l_c)/(ΛL)^(2/3) finite, where l_c is the cooperation length of the system, T is the transmission coefficient of the mirrors, and L and Λ are the sample length and cavity length respectively. We confirm the validity of this condition using a numerical analysis and provide a simple physical interpretation. In the mean field limit, we show that if the cavity linewidth is greater than the spectral width of the pulse emitted by the sample, the emission remains superfluorescent and is not sensitive to the presence of the cavity. We also show that in the opposite case the emission is sensitive to the cavity parameters and no longer superfluorescent. Bonifacio, R.; De Salvo, L.; Robb, G. R. M. 1997-02-01 13 Differential cross sections (DCSs) are presented for reactive and inelastic H + D2 collisions over a wide range of collision energies and product quantum states. A mixture of HBr and D2 is expanded into a vacuum chamber; a laser photolyzes HBr to initiate the collision process. Three-dimensional ion imaging is employed to detect HD/D2 products that have been quantum state selected by resonance enhanced multiphoton ionization. The construction of the imaging instrument and a novel application of two-color Doppler-free ionization are described. Reactively scattered HD(v' = 1, j') products are mostly back scattered, and the DCS contains a single peak; the dependence of the DCS on the collision energy over the range 1.48 eV ≤ Ecoll ≤ 1.94 eV is very weak.
This behavior is consistent with the direct recoil mechanism that is known to be dominant. For HD(v' = 1, j' = 1, 2) at collision energies Ecoll ≥ 1.72 eV, a bimodal feature is observed, which may be caused by indirect scattering from the conical intersection. For HD(v' = 3, j' = 0), there are three major peaks whose widths and centers vary rapidly with the collision energy. Recent quantum mechanical (QM) calculations attributed this behavior to the interference between nearside and farside pathways. New and existing QM calculations accurately reproduce the measured DCSs for reactive scattering; the experiments presented here corroborate the theoretical predictions to a much higher level of detail compared with previous measurements. Inelastically scattered D2(v' = 1-4, j') products are mostly forward scattered. This observation is contrary to the commonly accepted wisdom that collisions capable of transferring a large amount of energy into vibration occur at low impact parameters and are back scattered. We compare our results with quasi-classical trajectory calculations and suggest that the forward scattering can be explained by a tug-of-war mechanism in which attractive forces dominate the inelastic scattering process. Many inelastic trajectories recross the reaction barrier, and we find evidence of quantum interference effects for D2(v' = 3, j' ≤ 2) that may be related to those observed for the HD(v' = 3, j' = 0) reactive channel. Goldberg, Noah Tribe 14 SciTech Connect The propagation of waves in a nano-sized GaAs semiconductor induced by an electron beam is investigated. A dispersion relation is derived by using quantum hydrodynamics equations including the electron and hole quantum recoil effects, exchange-correlation potentials, and degenerate pressures. It is found that the propagating modes are unstable and strongly depend on the electron beam parameters, as well as on the quantum recoil effects and degenerate pressures.
The instability region shrinks with the increase of the semiconductor number density. The instability arises because the energetic electron beam produces electron-hole pairs, which do not keep in phase with the electrostatic potential arising from the pair plasma. Yahia, M. E. [Faculty of Engineering, The British University in Egypt (BUE), El-Shorouk City, Cairo (Egypt); National Institute of Laser Enhanced Sciences (NILES), Cairo University (Egypt)]; Azzouz, I. M. [National Institute of Laser Enhanced Sciences (NILES), Cairo University (Egypt)]; Moslem, W. M. [Department of Physics, Faculty of Science, Port Said University, Port Said (Egypt)] 2013-08-19 15 The propagation of waves in a nano-sized GaAs semiconductor induced by an electron beam is investigated. A dispersion relation is derived by using quantum hydrodynamics equations including the electron and hole quantum recoil effects, exchange-correlation potentials, and degenerate pressures. It is found that the propagating modes are unstable and strongly depend on the electron beam parameters, as well as on the quantum recoil effects and degenerate pressures. The instability region shrinks with the increase of the semiconductor number density. The instability arises because the energetic electron beam produces electron-hole pairs, which do not keep in phase with the electrostatic potential arising from the pair plasma. Yahia, M. E.; Azzouz, I. M.; Moslem, W. M. 2013-08-01 16 PubMed The relativistic recoil effect has been the object of experimental investigations using highly charged ions at the Heidelberg electron beam ion trap.
Its scaling with the nuclear charge Z boosts its contribution to a measurable level in the magnetic-dipole (M1) transitions of B- and Be-like Ar ions. The isotope shifts of 36Ar versus 40Ar have been detected with sub-ppm accuracy, and the recoil effect contribution was extracted from the 1s(2)2s(2)2p 2P(1/2) - 2P(3/2) transition in Ar13+ and the 1s(2)2s2p 3P1-3P2 transition in Ar14+. The experimental isotope shifts of 0.00123(6) nm (Ar13+) and 0.00120(10) nm (Ar14+) are in agreement with our present predictions of 0.00123(5) nm (Ar13+) and 0.00122(5) nm (Ar14+) based on the total relativistic recoil operator, confirming that a thorough understanding of correlated relativistic electron dynamics is necessary even in a region of intermediate nuclear charges. PMID:17025810 Orts, R Soria; Harman, Z; López-Urrutia, J R Crespo; Artemyev, A N; Bruhns, H; Martínez, A J González; Jentschura, U D; Keitel, C H; Lapierre, A; Mironov, V; Shabaev, V M; Tawara, H; Tupitsyn, I I; Ullrich, J; Volotka, A V 2006-09-01 17 SciTech Connect Preferential loss of uranium-234 relative to uranium-238 from rocks into solutions has long been attributed to recoiling alpha-emitting nuclei. Direct evidence has been obtained for two mechanisms, first, recoil ejection from grains, and now release by natural etching of alpha-recoil tracks. The observations have implications for radon emanation and for the storage of alpha-emitting radioactive waste. Fleischer, R.L. 1980-02-29 18 We present a complete quantum mechanical treatment of the exponential instability of CARL. We show that the Glauber P function, in general, is the one that results from a superposition of a coherent probe field and the spontaneous emission chaotic field. In particular, if no probe is present the photon statistics during the exponential growth is that of a chaotic thermal field. 
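The claim in the CARL abstract above, that the photon statistics during the exponential growth are those of a chaotic thermal field, can be checked against the textbook signature of chaotic light: a second-order correlation g²(0) = 2, versus 1 for a coherent field. A minimal numerical sketch; the mean photon number n̄ = 2 and the truncation at 200 terms are illustrative choices, not values from the paper:

```python
import math

def g2(pn):
    """g2(0) = <n(n-1)> / <n>**2 for a photon-number distribution pn."""
    mean = sum(n * p for n, p in enumerate(pn))
    nn1 = sum(n * (n - 1) * p for n, p in enumerate(pn))
    return nn1 / mean ** 2

nbar, nmax = 2.0, 200

# Bose-Einstein (chaotic/thermal) photon-number distribution
thermal = [nbar ** n / (1.0 + nbar) ** (n + 1) for n in range(nmax)]

# Poisson (coherent) distribution, built iteratively to avoid huge factorials
coherent = [math.exp(-nbar)]
for n in range(1, nmax):
    coherent.append(coherent[-1] * nbar / n)

print(round(g2(thermal), 3), round(g2(coherent), 3))  # -> 2.0 1.0
```

The factor-of-two photon bunching is what distinguishes the chaotic field described in the abstract from an injected coherent probe.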
Bonifacio, Rodolfo 1998-01-01 19 A series of acid-leaching experiments have been carried out on a sample of uranium ore from reactor zone number 10 of the Oklo mines in Gabon. Anomalously high U-234/U-238 ratios were observed accompanied by modestly increased U-235/U-238 ratios in uranium fractions. These results, which can be interpreted as being due to the alpha-recoil effects of U-238 and Pu-239, provide a convenient way of calculating the conversion factor (the fraction of uranium atoms converted to plutonium) of the natural reactors from radiochemical data, obviating the necessity for mass-spectrometric measurements. Sheng, Z. Z.; Kuroda, P. K. 1984-12-01 20 Misra and Sudarshan pointed out, based on the quantum measurement theory, that repeated measurements lead to a slowing down of the transition, which they called the quantum Zeno effect. Recently, Itano, Heinzen, Bollinger and Wineland have reported that they succeeded in observing that effect. We show that the results of Itano et al. can be recovered through conventional quantum mechanics T. Petrosky; S. Tasaki; I. Prigogine 1990-01-01 21 National Technical Information Service (NTIS) This report results from a contract tasking MIREA Technical University as follows: The contractor will develop an analytical and quantitatively proven treatment of how to incorporate the nuclear recoil phenomenon into the nuclear gamma-ray lasing process,... L. A. Rivlin 1999-01-01 22 SciTech Connect A theoretical model was developed to predict the amount of nucleation that occurs as a result of neutron interactions in superheated liquids. The model utilizes nuclear cross-section data, charged-particle linear energy transfer information, and computations of critical bubble nucleation energy to generate the number of bubbles formed in superheated liquid droplet ('bubble') neutron detectors exposed to neutron fluxes of specified intensity and energy. 
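The slow-down described in the Misra-Sudarshan quantum Zeno abstract above can be reproduced with elementary arithmetic: if a two-level system that would otherwise undergo a full π-pulse is instead projectively measured n times at equal intervals, its survival probability is cos²ⁿ(π/2n), which tends to 1 as n grows. A minimal sketch in the spirit of the Itano et al. experiment; no parameters here come from the abstracts themselves:

```python
import math

def survival(n):
    """Probability of remaining in the initial state after n equally spaced
    projective measurements during what would otherwise be a full pi-pulse."""
    return math.cos(math.pi / (2 * n)) ** (2 * n)

# frequent measurement progressively freezes the transition
probs = [survival(n) for n in (1, 4, 16, 64, 256)]
print([round(p, 3) for p in probs])
```

With a single final measurement (n = 1) the transition always completes; by n = 256 the system is found in its initial state more than 99% of the time.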
Previous experimental attempts to relate the effective (energy-depositing) ion track length L to the critical bubble radius r_c using a dimensionless coefficient were unsuccessful. The formulation of a new coefficient b, equal to the ratio of the effective ion track length L to the seed bubble radius r_o, is now proposed. By parameterizing the value of b within the model, the least-squares best value of b was determined to be 4.3 for both high- and low-energy 252Cf neutrons. Thus, the effective recoil ion track length in radiation-induced nucleation can be determined if the seed bubble radius is known. Harper, M.J. (United States Naval Academy, Annapolis, MD (United States)) 1993-06-01 23 SciTech Connect High-dose (4 to 7.5 x 10^15 cm^-2) As implantations into p-type (100) Si have been carried out through a screen oxide of thickness ≤ 775 Å and without a screen oxide. The effect of recoiled O on damage annealing and electrical properties of the implanted layers has been investigated using a combination of the following techniques: TEM, RBS/MeV He+ channeling, SIMS and Hall measurements in conjunction with chemical stripping and sheet resistivity measurements. The TEM results show that there is a dramatically different annealing behavior of the implantation damage for the through-oxide implants (Case I) as compared to implants into bare silicon (Case II). Comparison of the structural defect profiles with O distributions obtained by SIMS demonstrated that the retardation in the secondary damage growth in Case I can be directly related to the presence of O. Weak-beam TEM showed that a high density of fine defect clusters (≤ 50 Å) was present both in Case I and Case II. The electrical profiles showed only 30% of the total As to be electrically active. The structural and electrical results have been explained by a model that entails As-O, Si-O and As-As complex formation and their interaction with the dislocations.
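The Harper abstract above reduces the nucleation geometry to a single dimensionless coefficient b = L / r_o with a reported least-squares best value of 4.3, so the effective recoil-ion track length follows directly from the seed bubble radius. A one-line sketch; the 20 nm seed radius is purely illustrative, not a value from the report:

```python
B = 4.3  # reported least-squares best value of b = L / r_o for 252Cf neutrons

def effective_track_length(seed_radius):
    """Effective (energy-depositing) recoil-ion track length L = b * r_o."""
    return B * seed_radius

# illustrative 20 nm seed-bubble radius gives L of about 86 nm
print(effective_track_length(20.0))
```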
Sadana, D.K.; Wu, N.R.; Washburn, J.; Current, M.; Morgan, A.; Reed, D.; Maenpaa, M. 1982-10-01 24 NASA Technical Reports Server (NTRS) Olivine and pyroxene are the major ferromagnesian minerals in most meteorite types and in mafic igneous rocks that are dominant at the surface of the Earth. It is probable that they are the major mineralogical components at the surface of any planetary body that has undergone differentiation processes. In situ mineralogical studies of the rocks and soils on Mars suggest that olivine is a widespread mineral on that planet's surface (particularly at the Gusev site) and that it has been relatively unaffected by alteration. Thus an understanding of the characteristics of Mössbauer spectra of olivine is of great importance in interpreting MER results. However, variable-temperature Mössbauer spectra of olivine, which are needed to quantify recoil-free fraction effects and to understand the temperature dependence of olivine spectra, are lacking in the literature. Thus, we present here a study of the temperature dependence and recoil-free fraction of a series of synthetic olivines. Sklute, E. C.; Rothstein, Y.; Dyar, M. D.; Schaefer, M. W.; Menzies, O. N.; Bland, P. A.; Berry, F. J. 2005-01-01 25 Displacement of the daughter isotope by α-recoil results in an open system on the nanoscale. For a heterogeneous distribution of U and Th, this redistribution of intermediate and stable daughter isotopes results in subvolumes with a deficit of Pb and others with an excess of Pb. Whether such heterogeneities affect the analyzed U–Pb system depends on: (1) the volume of Rolf L. Romer 2003-01-01 26 We outline the basic physics of CARL in the cold and warm beam limits, showing that recoil gain and self-bunching can occur with very different features and intensities depending on the relevance of the velocity spread.
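The recoil-free fraction discussed in the olivine Mössbauer abstract above is the Lamb-Mössbauer factor f = exp(-k²⟨x²⟩), where k is the γ-ray wavevector and ⟨x²⟩ the mean-square displacement of the absorbing nucleus; the temperature dependence enters through ⟨x²⟩. A sketch using the standard 14.41 keV ⁵⁷Fe Mössbauer line; the ⟨x²⟩ values below are illustrative assumptions, not olivine measurements:

```python
import math

HBAR_C = 197.327  # eV * nm

def recoil_free_fraction(e_gamma_ev, msd_nm2):
    """Lamb-Mossbauer factor f = exp(-k^2 <x^2>) with k = E_gamma / (hbar c)."""
    k = e_gamma_ev / HBAR_C  # gamma-ray wavevector in nm^-1
    return math.exp(-k ** 2 * msd_nm2)

# 14.41 keV Fe-57 line; <x^2> = 0.01 and 0.02 square angstroms (illustrative)
f_cold = recoil_free_fraction(14410.0, 1.0e-4)
f_warm = recoil_free_fraction(14410.0, 2.0e-4)
print(round(f_cold, 3), round(f_warm, 3))
```

Larger thermal displacements at higher temperature shrink f, which is why variable-temperature spectra are needed to quantify recoil-free-fraction effects.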
In the cold beam limit we find the well known high gain FEL regime, whereas in the warm beam limit one has a small gain regime described by the derivative of the thermal velocity distribution. Bonifacio, R.; Verkerk, P. 1996-02-01 27 Recent numerical relativity simulations have shown that the final black hole produced in a binary merger can recoil with a velocity as large as 5000km/s. Because of enhanced gravitational-wave emission in the so-called “hang-up” configurations, this maximum recoil occurs when the black-hole spins are partially aligned with the orbital angular momentum. We revisit our previous statistical analysis of post-Newtonian evolutions of black-hole binaries in the light of these new findings. We demonstrate that despite these new configurations with enhanced recoil velocities, spin alignment during the post-Newtonian stage of the inspiral will still significantly suppress (or enhance) kick magnitudes when the initial spin of the more massive black hole is more (or less) closely aligned with the orbital angular momentum than that of the smaller hole. We present a preliminary study of how this post-Newtonian spin alignment affects the ejection probabilities of supermassive black holes from their host galaxies with astrophysically motivated mass ratio and initial spin distributions. We find that spin alignment suppresses (enhances) ejection probabilities by ˜40% (20%) for an observationally motivated mass-dependent galactic escape velocity, and by an even greater amount for a constant escape velocity of 1000km/s. Kick suppression is thus at least a factor two more efficient than enhancement. Berti, Emanuele; Kesden, Michael; Sperhake, Ulrich 2012-06-01 28 PubMed Central The scattering of a single photon with sufficiently high energy can cause a recoil of a motional scatterer. We study its backaction on the photon's coherent transport in one dimension by modeling the motional scatterer as a two-level system, which is trapped in a harmonic potential. 
While the reflection spectrum consists of a single peak in the Lamb-Dicke limit, multiple peaks due to phonon excitations can be observed in the reflection spectrum as the trap becomes looser or the mass of the two-level system becomes smaller. Li, Qiong; Xu, D. Z.; Cai, C. Y.; Sun, C. P. 2013-01-01 29 The idea that quantum-mechanical phenomena can play nontrivial roles in biology has fascinated researchers for a century. Here we review some examples of such effects, including light-harvesting in photosynthesis, vision, electron- and proton-tunneling, olfactory sensing, and magnetoreception. We examine how experimental tests have aided this field in recent years and discuss the importance of developing new experimental probes for future Graham R. Fleming; Gregory D. Scholes; Yuan-Chung Cheng 2011-01-01 30 The behavior displayed by a quantum system when it is perturbed by a series of von Neumann measurements over time is analyzed. Because of the similarity of this general process to giving a deck of playing cards a shuffle, it is referred to here as quantum shuffling, and we show that the quantum Zeno and anti-Zeno effects emerge naturally as two time limits. Within this framework, a connection between the gradual transition from anti-Zeno to Zeno behavior and the appearance of an underlying Markovian dynamics is found. Accordingly, although a priori it might seem counterintuitive, the quantum Zeno effect corresponds to a dynamical regime in which any trace of knowledge of how the unperturbed system would initially evolve is wiped out (very rapid shuffling). This would explain why the system apparently does not evolve or decay for a relatively long time, although it eventually undergoes an exponential decay. By means of a simple working model, conditions characterizing the shuffling dynamics have been determined, which can help in understanding and devising quantum control mechanisms in a number of processes in atomic, molecular and optical physics. Sanz, A.
S.; Sanz-Sanz, C.; González-Lezana, T.; Roncero, O.; Miret-Artés, S. 2012-04-01 31 An effective formalism for quantum constrained systems is presented which allows manageable derivations of solutions and observables, including a treatment of physical reality conditions without requiring full knowledge of the physical inner product. Instead of a state equation from a constraint operator, an infinite system of constraint functions on the quantum phase space of expectation values and moments of states Martin Bojowald; Barbara Sandhöfer; Aureliano Skirzewski; Artur Tsobanjan 2009-01-01 32 PubMed A systematic shift of the photon recoil momentum due to the index of refraction of a dilute gas of atoms has been observed. The recoil frequency was determined with a two-pulse light grating interferometer using near-resonant laser light. The results show that the recoil momentum of atoms caused by the absorption of a photon is nℏk, where n is the index of refraction of the gas and k is the vacuum wave vector of the photon. This systematic effect must be accounted for in high-precision atom interferometry with light gratings. PMID:15904272 Campbell, Gretchen K; Leanhardt, Aaron E; Mun, Jongchul; Boyd, Micah; Streed, Erik W; Ketterle, Wolfgang; Pritchard, David E 2005-05-01 33 SciTech Connect A systematic shift of the photon recoil momentum due to the index of refraction of a dilute gas of atoms has been observed. The recoil frequency was determined with a two-pulse light grating interferometer using near-resonant laser light. The results show that the recoil momentum of atoms caused by the absorption of a photon is nℏk, where n is the index of refraction of the gas and k is the vacuum wave vector of the photon. This systematic effect must be accounted for in high-precision atom interferometry with light gratings. Campbell, Gretchen K.; Leanhardt, Aaron E.; Mun, Jongchul; Boyd, Micah; Streed, Erik W.; Ketterle, Wolfgang; Pritchard, David E.
[MIT-Harvard Center for Ultracold Atoms, Research Laboratory of Electronics and Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States) 2005-05-06 34 We consider a model of a spin system under the influence of decoherence, in which the system is coupled to a dissipating environmental system consisting of either spins or bosonic modes. The dissipation of the environment is governed by a certain probability with which an environmental system localized around the principal system dissipates into a larger bath while a thermal environmental system migrates into its place. A certain threshold on this probability is found in the growth of decoherence in the principal system. Dissipation probabilities both larger and smaller than the threshold result in smaller decoherence. This finding is utilized to elucidate a spin relaxation theory of a magnetic resonance spectrometer. In particular, a seamless description of transverse relaxation and motional narrowing is possible. We also numerically evaluate the dynamics of coherence useful for quantum information processing. The bang-bang control and anti-Zeno effect in entanglement and the Oppenheim-Horodecki nonclassical correlation are investigated in the model of spin-boson coupling. Saitoh, Akira; Rahimi, Robabeh; Nakahara, Mikio 2010-11-01 35 SciTech Connect The movement of high-specific-activity radioactive particles (i.e., alpha recoil) has been observed and studied since the early 1900s. These studies have been motivated by concerns about containment of radioactivity and the protection of human health. Additionally, studies have investigated the potential advantage of alpha recoil to effect separations of various isotopes. This report provides a review of the observations and results of a number of the studies. Icenhour, A.S.
2005-05-19 36 The movement of high-specific-activity radioactive particles (i.e., alpha recoil) has been observed and studied since the early 1900s. These studies have been motivated by concerns about containment of radioactivity and the protection of human health. Additionally, studies have investigated the potential advantage of alpha recoil to effect separations of various isotopes. This report provides a review of the observations and A. S. Icenhour 2005-01-01 37 The Recoil Shadow Anisotropy Method (RSAM) is a new experimental method for identifying isomers in the nanosecond range and measuring their half-lives. This method can be applied to experiments performed with thin targets and γ-ray multidetector arrays including collimated composite detectors and does not require any additional device. It uses the shadow effect imposed by the collimators on the different elements of composite detectors for γ-rays emitted by recoiling nuclei. RSAM was developed for the clover detectors of the Eurogam-2 array and tested using several data sets obtained with this array. A number of known isomers with half-lives lying between 0.9 and 18 ns in 194Hg, 191Au, 148Gd, 149Gd, 193Pb and 194Pb have been successfully re-measured, proving the ability of RSAM for lifetime measurements. Gueorguieva, E.; Kaci, M.; Schück, C.; Minkova, A.; Vieu, Ch.; Correia, J. J.; Dionisio, J. S. 2001-12-01 38 We demonstrate effective equilibration for unitary quantum dynamics under conditions of classical chaos. Focusing on the paradigmatic example of the Dicke model, we show how a constructive description of the thermalization process is facilitated by the Glauber Q or Husimi function, for which the evolution equation turns out to be of Fokker-Planck type. The equation describes a competition of classical drift and quantum diffusion in contractive and expansive directions.
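The nℏk result reported in the Campbell et al. photon-recoil abstracts above rescales the recoil momentum by the refractive index of the dilute gas. A minimal sketch of the size of the effect; the sodium-like wavelength and the index value are illustrative assumptions, not numbers from the papers:

```python
import math

HBAR = 1.054571817e-34  # J*s

def recoil_momentum(vacuum_wavelength_m, refractive_index):
    """Atomic recoil from absorbing one photon in a dilute gas: p = n*hbar*k,
    with k the vacuum wave vector (index-of-refraction-corrected recoil)."""
    k = 2.0 * math.pi / vacuum_wavelength_m
    return refractive_index * HBAR * k

p_vac = recoil_momentum(589e-9, 1.0)     # illustrative 589 nm transition
p_gas = recoil_momentum(589e-9, 1.0001)  # illustrative dilute-gas index
print(f"fractional shift: {p_gas / p_vac - 1.0:.1e}")
```

Even a part-in-10⁴ index shifts the recoil frequency by the same fraction, which is why the effect matters for high-precision atom interferometry.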
By this mechanism the system follows a “quantum smoothened” approach to equilibrium, which avoids the notorious singularities inherent to classical chaotic flows. Altland, Alexander; Haake, Fritz 2012-02-01 39 PubMed We demonstrate effective equilibration for unitary quantum dynamics under conditions of classical chaos. Focusing on the paradigmatic example of the Dicke model, we show how a constructive description of the thermalization process is facilitated by the Glauber Q or Husimi function, for which the evolution equation turns out to be of Fokker-Planck type. The equation describes a competition of classical drift and quantum diffusion in contractive and expansive directions. By this mechanism the system follows a "quantum smoothened" approach to equilibrium, which avoids the notorious singularities inherent to classical chaotic flows. PMID:22401203 Altland, Alexander; Haake, Fritz 2012-02-17 40 For the final running period of HERA, a recoil detector was installed at the HERMES experiment to improve measurements of hard exclusive processes in charged-lepton nucleon scattering. Here, deeply virtual Compton scattering is of particular interest as this process provides constraints on generalised parton distributions that give access to the total angular momenta of quarks within the nucleon. The HERMES recoil detector was designed to improve the selection of exclusive events by a direct measurement of the four-momentum of the recoiling particle. It consisted of three components: two layers of double-sided silicon strip sensors inside the HERA beam vacuum, a two-barrel scintillating fibre tracker, and a photon detector. All sub-detectors were located inside a solenoidal magnetic field with a field strength of 1T. The recoil detector was installed in late 2005. After the commissioning of all components was finished in September 2006, it operated stably until the end of data taking at HERA end of June 2007. 
The present paper gives a brief overview of the physics processes of interest and the general detector design. The recoil detector components, their calibration, the momentum reconstruction of charged particles, and the event selection are described in detail. The paper closes with a summary of the performance of the detection system. Airapetian, A.; Aschenauer, E. C.; Belostotski, S.; Borisenko, A.; Bowles, J.; Brodski, I.; Bryzgalov, V.; Burns, J.; Capitani, G. P.; Carassiti, V.; Ciullo, G.; Clarkson, A.; Contalbrigo, M.; De Leo, R.; De Sanctis, E.; Diefenthaler, M.; Di Nezza, P.; Düren, M.; Ehrenfried, M.; Guler, H.; Gregor, I. M.; Hartig, M.; Hill, G.; Hoek, M.; Holler, Y.; Hristova, I.; Jo, H. S.; Kaiser, R.; Keri, T.; Kisselev, A.; Krause, B.; Krauss, B.; Lagamba, L.; Lehmann, I.; Lenisa, P.; Lu, S.; Lu, X.-G.; Lumsden, S.; Mahon, D.; Martinez de la Ossa, A.; Murray, M.; Mussgiller, A.; Nowak, W.-D.; Naryshkin, Y.; Osborne, A.; Pappalardo, L. L.; Perez-Benito, R.; Petrov, A.; Pickert, N.; Prahl, V.; Protopopescu, D.; Reinecke, M.; Riedl, C.; Rith, K.; Rosner, G.; Rubacek, L.; Ryckbosch, D.; Salomatin, Y.; Schnell, G.; Seitz, B.; Shearer, C.; Shutov, V.; Statera, M.; Steijger, J. J. M.; Stenzel, H.; Stewart, J.; Stinzing, F.; Trzcinski, A.; Tytgat, M.; Vandenbroucke, A.; Van Haarlem, Y.; Van Hulse, C.; Varanda, M.; Veretennikov, D.; Vilardi, I.; Vikhrov, V.; Vogel, C.; Yaschenko, S.; Ye, Z.; Yu, W.; Zeiler, D.; Zihlmann, B. 2013-05-01 41 SciTech Connect Within the change of self-consistent field approximation, x-ray spectra can be considerably richer in many-electron phenomena than once suspected. With the finite number of electrons method, these spectra can be evaluated for realistic electron-hole interactions in free electron metals. Preliminary results indicate that metals with band structure can also be treated this way. 
However, theories of final-state interactions in metals await the reliable determination of the screened potential of a core hole in a metal and a realistic evaluation of the effects of electron-electron interactions. (GHT) Dow, J. D.; Swarts, C. A.; Bowen, M. A.; Mehreteab, E.; Satpathy, S. S. 1980-01-01 42 The quantum annealing method has attracted wide attention in statistical physics and information science, since, like simulated annealing, it is expected to be a powerful method for obtaining the best solution of an optimization problem. The quantum annealing method was incubated in quantum statistical physics. It is an alternative to simulated annealing, which is well adapted to many optimization problems. In simulated annealing, we obtain a solution of an optimization problem by gradually decreasing the temperature (thermal fluctuation). In quantum annealing, in contrast, we gradually decrease a quantum field (quantum fluctuation) and obtain a solution. In this paper we review how to implement quantum annealing and show some quantum fluctuation effects in frustrated Ising spin systems. Tanaka, Shu; Tamura, Ryo 2013-09-01 43 SciTech Connect Dimensionality is an important factor governing the electronic structures of semiconductor nanocrystals. The quantum confinement energies in one-dimensional quantum wires and zero-dimensional quantum dots are quite different. Using large-scale first-principles calculations, we systematically study the electronic structures of surface-passivated semiconductor (including group IV, III-V, and II-VI) quantum wires and dots. The band-gap energies of quantum wires and dots have the same scaling with diameter for a given material. The ratio of band-gap increases between quantum wires and dots is material-dependent and deviates slightly from the 0.586 predicted by the effective-mass approximation. Highly linear polarization of photoluminescence in quantum wires is found.
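The 0.586 quoted in the quantum-wire/dot abstract above can be reproduced from infinite-barrier effective-mass ground states: at equal diameter, a spherical dot confines with energy proportional to π² and a cylindrical wire (transverse) with energy proportional to j₀₁², the first zero of the Bessel function J₀. The derivation below is a standard reconstruction of where that number comes from, not a calculation taken from the paper itself:

```python
import math

J01 = 2.404825557695773  # first zero of the Bessel function J0

# Infinite-barrier effective-mass ground-state confinement energies at equal
# diameter d (radius R = d/2, effective mass m):
#   dot  (sphere):               E_dot  = (hbar * pi)**2  / (2 * m * R**2)
#   wire (cylinder, transverse): E_wire = (hbar * J01)**2 / (2 * m * R**2)
ratio = J01 ** 2 / math.pi ** 2
print(round(ratio, 3))  # -> 0.586
```

The prefactors cancel in the ratio, so the effective-mass prediction is a pure geometric number, independent of material and diameter; the abstract's point is that first-principles results deviate slightly from it.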
The degree of polarization decreases with increasing temperature and size. Li, Jingbo; Wang, Lin-Wang 2004-03-30 44 PubMed Central The radiochemical dipyrrolidinedithiocarbamato-212Pb(II) [212Pb(PDC)2] is synthesized and its effects on colony formation in cultured Chinese hamster V79 cells are investigated. The cellular uptake, biological retention, subcellular distribution and cytotoxicity of the radiocompound are determined. The 212Pb is taken up quickly by the cells, reaching saturation levels in 1.25 h. When the cells are washed, the intracellular activity is retained with a biological half-life of 11.6 h. Gamma-ray spectroscopy indicates that the 212Pb daughters (212Bi, 212Po and 208Tl) are in secular equilibrium within the cell. About 72% of the cellular activity localizes in the cell nucleus, of which 35% is bound specifically to nuclear DNA. The mean cellular uptake required to achieve 37% survival is 0.35 mBq of 212Pb per cell, which delivers a dose of 1.0 Gy to the cell nucleus when the recoil energy of the 212Bi and 212Po decays is ignored and 1.7 Gy when recoil is included. The corresponding RBE values compared to acute external 137Cs γ rays at 37% survival are 4.0 and 2.3, respectively. The chemical Pb(PDC)2 is not chemotoxic at the concentrations used in this study. Because the β-particle emitter 212Pb decays to the α-particle-emitting daughters 212Bi and 212Po, these studies provide information on the biological effects of α-particle decays that occur in the cell nucleus. Our earlier studies with cells of the same cell line using 210Po (which emits a 5.3 MeV α particle) localized predominantly in the cytoplasm resulted in an RBE of 6. These earlier results for 210Po, along with the present results for 212Pb, suggest that the recoil energy associated with the 212Bi and 212Po daughter nuclei plays little or no role in imparting biological damage to critical targets in the cell nucleus. Azure, Michael T.; Archer, Ronald D.; Sastry, Kandula S.
R.; Rao, Dandamudi V.; Howell, Roger W. 2012-01-01 45 Correlation effects in the quantum crystals He3 and He4 are studied in detail. The single-particle wave function is obtained in the harmonic effective-field approximation; the parameters of the harmonic-oscillator potential are determined self-consistently from the two-body correlation function and the bare interatomic potential. We determine the two-body correlation function by solving numerically an equation derived by decoupling the three-body correlation C. Ebner; C. C. Sung 1971-01-01 46 We perform a study of a collective atomic recoil laser (CARL) that goes beyond the initial growth period. The study is based on a theory that treats both internal and external degrees of atomic freedom quantum mechanically but regards the laser light as a classical field obeying Maxwell's equations. We introduce the concepts of momentum families and diffraction groups and organize the matter wave equations in terms of diffraction groups. The steady-state lasing conditions are discussed in connection with the probe gain in the recoil-induced resonances. The nontrivial steady states and the linear stability analysis of the steady states are both carried out by the method of two-dimensional continued fractions. Both stable and unstable nontrivial steady states are calculated and discussed in the context of regarding the CARL as multiwave mixing involving many modes of matter waves and two optical fields. Ling, H. Y.; Pu, H.; Baksmaty, L.; Bigelow, N. P. 2001-05-01 47 A path-integral Car-Parrinello molecular dynamics simulation of liquid water and ice is performed. It is found that the inclusion of nuclear quantum effects systematically improves the agreement of first-principles simulations of liquid water with experiment. In addition, the proton momentum distribution is computed utilizing a recently developed open path-integral molecular dynamics methodology. 
It is shown that these results are in good agreement with experimental data. Morrone, Joseph A.; Car, Roberto 2008-07-01 48 Experiments shown here reveal inflection points of the Hall resistivity at half-integer filling factors 5/2 and 7/2 which become more pronounced with increasing current and finally lead to half-integer plateau like structures. These features contradict the edge-state picture of the quantum Hall effect (QHE) and also the disorder picture of the QHE, which cannot explain a gap directly in the middle of a Landau level. We present a novel approach to the quantum Hall effect, which allows us to calculate the electronic transport in a highly non-uniform Hall field, which is present in two opposite corners of a Hall bar, the hot-spots. Precisely in one corner electrons are injected into the device and we derive the local density of states there. We obtain a self-consistent equation for the current-voltage relation through the Ohmic contact and thus a computable theory of the quantum Hall effect, which predicts a unique modulation and splitting of Landau levels caused by the presence of a high electric field exactly in line with the experimental observations. Kramer, Tobias; Heller, E. J.; Parrott, R. E.; Liang, C.-T.; Huang, C. F.; Chen, K. Y.; Lin, L.-H.; Wu, J.-Y.; Lin, S.-D. 2009-03-01 49 General relativity promotes space-time to a physical, dynamical object subject to equations of motion. Quantum gravity, accordingly, must provide a quantum framework for space-time, applicable on the smallest distance scales. Just like generic states in quantum mechanics, quantum space-time structures may be highly counter-intuitive. But if low-energy effects can be extracted, they shed considerable light on the implications to be Martin Bojowald 2010-01-01 50 National Technical Information Service (NTIS) A principal challenge faced by the U.S. 
Army TACOM-ARDEC Benet Laboratories in the design of armaments for lightweight future fighting vehicles with lethality overmatch is mitigating the deleterious effects of large caliber cannon recoil. The sonic RArefa... E. Kathe R. Dillon 2002-01-01 51 For the main quantum interference term of coherent electronic transport, we study the effect of temperature, perpendicular and/or parallel magnetic fields, spin-orbit coupling and tunneling rates in both metallic grains and mesoscopic heterostructures. We show that the Zeeman effect determines a crucial way to characterize the quantum interference phenomena of the noise for anisotropic systems (mesoscopic heterostructures), qualitatively distinct from those observed in isotropic structures (metallic grains). Ramos, J. G. G. S.; Barbosa, A. L. R.; Hussein, M. S. 2014-06-01 52 Our investigation focuses on the effect of dopant dose loss during annealing treatments on heavily doped surface layers obtained by recoil implantation of antimony in silicon. We are particularly interested in the increase of sheet resistance caused by the shallow junctions obtained at the substrate surface, and in the contribution of the dopant dose loss phenomenon that follows from the high concentration of impurities at the surface. In this work, we report quantitative data concerning the dopant loss at the surface of the implanted silicon and its dependence on annealing treatments. Electrical measurements combined with Rutherford backscattering (RBS) analysis showed interesting sheet-resistance values compared with classical ion implantation, despite the dopant dose loss phenomenon. Mesli, M. N.; Benbahi, B.; Bouafia, H.; Belmekki, M.; Abidri, B.; Hiadsi, S. 2013-08-01 53 Quantum gravity may have strong consequences for the neutrino oscillation phenomenon over large distances. We find a significant modification of neutrino oscillations due to quantum gravity effects.
Quantum gravity (Planck-scale effects) leads to an effective SU(2)_L × U(1)-invariant dimension-5 Lagrangian involving the neutrino and Higgs fields. On symmetry breaking, this operator gives rise to corrections to the neutrino masses and mixing. The gravitational interaction (M_X = M_Pl) demands that the elements of this perturbation matrix be independent of flavor indices. In this paper, we study quantum gravity effects on neutrino oscillation, namely the modified dispersion relation for the neutrino oscillation parameters. Koranga, Bipin Singh; Narayan, Mohan 2014-05-01 54 The present work attempts to find the influence of quantum effects on stimulated Brillouin scattering in semiconductor plasmas using a quantum hydrodynamic model. The third-order Brillouin susceptibility arising from the induced nonlinear current density in an n-type semiconductor crystal has been determined using coupled-mode analysis. The effect of the Bohm potential on the Brillouin gain coefficient is studied through the quantum corrections in the classical hydrodynamic equations. It is found that the Bohm potential in the electron dynamics enhances the Brillouin gain. A reduction in the threshold pump intensity of this process is realized as a consequence of including the quantum correction term. Vanshpal, Ravi; Dubey, Swati; Ghosh, S. 2013-06-01 55 SciTech Connect The role of local fields in the quantum electrodynamics of an isolated quantum dot (QD) has been analyzed. The system is modeled as a two-level quantum oscillator, strongly confined in space, illuminated by quantum light. The relation between local and acting fields in the QD has been derived in the dipole approximation from the integral Maxwell equations for the electromagnetic field operators. A formalism for the quantization of the electromagnetic field in electrically small scatterers has been developed. As a result, the Hamiltonian of the system has been formulated in terms of the acting field, with a separate term responsible for the effect of depolarization.
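The dimension-5 operator in the neutrino-oscillation abstract above generates, after electroweak symmetry breaking, mass corrections of order v²/M_Pl. The order of magnitude is easy to check; the vacuum expectation value and Planck mass below are standard textbook numbers, not parameters quoted in the abstract:

```python
V_EW = 174.0       # GeV, electroweak vacuum expectation value
M_PLANCK = 1.2e19  # GeV, Planck mass

# dimension-5 operator scale v^2 / M_Pl, converted from GeV to eV
mass_ev = (V_EW ** 2 / M_PLANCK) * 1.0e9
print(f"{mass_ev:.1e} eV")  # -> 2.5e-06 eV
```

A correction of a few microelectronvolts is tiny next to the measured mass splittings, which is why such Planck-scale terms show up only as small perturbations of the oscillation parameters over long baselines.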
The Schroedinger equation with that Hamiltonian has been solved in the linear approximation. The interaction of the QD with different quantum states of light, such as Fock states, coherent states, Fock qubits, and entangled states, has been analyzed. It has been shown that the local fields induce a fine structure in the QD absorption (emission) spectrum: instead of a single line at the frequency corresponding to the exciton transition, a doublet appears with one component shifted to the blue (red). The value of the shift depends only on the geometrical and electronic properties of the QD, while the intensities of the components are completely determined by the quantum light statistics. It has been demonstrated that in the limiting cases of classical light and the single-photon state, the doublet is reduced to a singlet, shifted in the former case and unshifted in the latter. A physical interpretation of the predicted effect has been proposed. Possible ways of experimentally observing the effect have been discussed, together with the potential for its utilization in quantum information processing. Slepyan, G.Ya.; Maksimenko, S.A.; Hoffmann, A.; Bimberg, D. [Institute of Nuclear Problems, Belarus State University, Bobruiskaya 11, 220050 Minsk (Belarus); Institut fuer Festkoerperphysik, Technische Universitaet Berlin, Hardenbergstrasse 36, 10623 Berlin (Germany) 2002-12-01 56 SciTech Connect Photon Landau damping of electron plasma waves with relativistic phase velocity is described, using a photon kinetic theory in which photon recoil is taken into account. An exact form of the wave kinetic equation is used. Kinetic and fluid regimes of photon beam instabilities are discussed. Diffusion in photon momentum space is derived and a quasilinear wave kinetic equation is established. In the present approach, photon recoil effects associated with the emission or absorption of plasmons are included.
The neglect of recoil, which is equivalent to using the geometric optics approximation, reduces the present results to those already existing in the literature. Mendonca, J. T.; Serbeto, A. [CFP and CFIF, Instituto Superior Tecnico, Av. Rovisco Pais 1, 1049-001 Lisbon (Portugal); Instituto de Fisica, Universidade Federal Fluminense, BR-24210-340 Niteroi, RJ (Brazil) 2006-10-15 57 In the dispersive regime of circuit quantum electrodynamics (QED), where the qubit and resonator frequencies are detuned, photons in the resonator exhibit induced frequency and phase shifts. The qubit-state dependent phase shift is usually measured by monitoring the resonator transmission spectrum at fixed qubit-resonator detuning. In this static scheme, the phase shift can only be monitored in the far-detuned, linear dispersion regime, in order to avoid measurement-induced demolition of the quantum state. By using a dynamic procedure to adiabatically drive the qubit frequency, here we are able to explore the dispersive interaction over a much broader range, and we further monitor the interaction using resonator Wigner tomography. Exotic non-linear effects on different photon states, e.g., Fock states, coherent states and Schrödinger cat states, are thereby directly revealed. Correspondingly, we demonstrate a quantum Kerr effect in the dynamic framework in circuit QED. Yin, Yi; Wang, Haohua; Mariantoni, Matteo; Barends, Rami; Bialczak, Radoslaw C.; Chen, Yu; Lenander, Mike; Kelly, Julian; Lucero, Erik; Megrant, Anthony; O'Malley, Peter; Sank, Daniel; Wenner, Jim; White, Ted; Cleland, Andrew; Martinis, John 2012-02-01 58 A cloud of ultra-cold atoms is loaded into the attractive potential of a light wave that is generated by two counter-propagating modes of a high-finesse ring resonator. The two modes are coupled by the atoms due to coherent Rayleigh scattering and generate a potential which acts back on the motion of the atoms.
This feedback leads to a new frequency component and can be described in terms of the long-proposed collective atomic recoil laser (CARL). This model is investigated experimentally and extended by introducing an optical friction force acting on the atoms. This allows for steady-state operation of the CARL. Furthermore, it leads to a threshold behaviour of the CARL that translates into a novel type of phase transition: on passing the threshold, the initially homogeneous atomic distribution becomes bunched in space and velocity. With this behaviour the system turns out to acquire some of the main features of the so-called Kuramoto model, which provides a very general description of a network of limit cycle oscillators. Zimmermann, Claus; Kruse, Dietmar; von Cube, Christoph; Slama, Sebastian; Deh, Benjamin; Courteille, Philippe 2004-06-01 59 SciTech Connect Nuclear transmutation reactions are based on the absorption of a smaller particle such as a neutron, proton, deuteron, alpha, etc. The resulting compound nucleus leaves its initial lattice mainly by taking the recoil, also helped by its sudden change in chemical properties. Recoil implantation is used in connection with thin and ultra-thin materials, mainly for producing radiopharmaceuticals and ultra-thin-layer radioactive tracers. In nuclear reactors, the use of nano-particulate pellets could facilitate recoil implantation for breeding, transmutation and partitioning purposes. Using enriched {sup 238}U or {sup 232}Th leads to {sup 239}Pu and {sup 233}U production, while using other actinides such as {sup 240}Pu, {sup 241}Am etc. leads to actinide burning. When such a lattice is immersed in a radiation-resistant fluid (water, methanol, etc.), the recoiled product is transferred into the flowing fluid and removed from the hot area using a concentrator/purifier, preventing the occurrence of secondary transmutation reactions.
The simulation of nuclear collision and energy transfer shows that the impacted nucleus recoils into the interstitial space, creating a defect or leaving the small lattice. The defect diffuses, and if no recombination occurs it stops at the lattice boundaries. The nano-grains are coated in a thin layer to obtain a hydrophilic shell that can be washed by the collection liquid in which the particle is immersed. The efficiency of collection depends on particle size and on the nuclear reaction channel parameters. For {sup 239}Pu the direct recoil extraction rate is about 70% for {sup 238}UO{sub 2} grains of 5 nm diameter and is brought up to 95% by diffusion, due to the incompatibility of {sup 239}Np with the uranium dioxide lattice. Particles of 5 nm are hard to produce, so a structure using particles of 100 nm has been tested. The particles were obtained by plasma sputtering in an oxygen atmosphere. Novel effects such as nano-cluster radiation-damage robustness and cluster-amplified defect rejection will be discussed. The advantage of the method and device is its ability to produce small amounts of isotopic materials that are easy to separate, using nuclear reactors, with higher yield than accelerator-based methods and requiring less chemistry. (author) Popa-Simil, Liviu [R and D, LAVM LLC., Los Alamos, NM, 87544 (United States) 2008-07-01 60 In this work, a path integral Car-Parrinello molecular dynamics (CPMD V3.11, Copyright IBM Corp 1990-2006, Copyright MPI fuer Festkoerperforschung Stuttgart 1997-2001) simulation of liquid water is performed. It is found that the inclusion of nuclear quantum effects systematically improves the agreement of first-principles simulations of liquid water with experiment. In addition, the proton momentum distribution is computed utilizing a recently developed "open" path integral molecular dynamics methodology (J.A. Morrone, V. Srinivasan, D. Sebastiani, R. Car, J. Chem. Phys. 126, 234504 (2007)).
It is shown that these results, which are consistent with our computations of the liquid structure, are in good agreement with neutron Compton scattering data (G.F. Reiter, J.C. Li, J. Mayers, T. Abdul-Redah, P. Platzman, Braz. J. Phys. 34, 142 (2004)). The remaining discrepancies between experiment and the present results are indicative of some degree of over-binding in the hydrogen bond network, likely engendered by the use of semi-local approximations to density functional theory in order to describe the electronic structure. Morrone, Joseph; Car, Roberto 2008-03-01 61 We explore the symmetry reduced form of a non-perturbative solution to the constraints of quantum gravity corresponding to quantum de Sitter space. The system has a remarkably precise analogy with the non-relativistic formulation of a particle falling in a constant gravitational field, which we exploit in our analysis. We find that the solution reduces to de Sitter space in the semi-classical limit, but the uniquely quantum features of the solution have a peculiar property. Namely, the unambiguous quantum structures are neither of Planck scale nor of cosmological scale. Instead, we find a periodicity in the volume of the universe whose period, using the observed value of the cosmological constant, is on the order of the volume of the proton. Randono, Andrew 2010-08-01 62 National Technical Information Service (NTIS) The basic facilities for the thin film compound semiconductor program have been developed. Results were obtained in the areas of transport properties of bismuth and indium antimonide films, the theory of quantum size effects, insulating barriers with InSb... J. R. Sites, C. W. Wilmsen 1974-01-01 63 National Technical Information Service (NTIS) A theory of the Fractional Quantum Hall Effect is constructed based on magnetic flux fractionization, which leads to instability of the system against self-compression.
A theorem is proved stating that arbitrary potentials fail to lift a specific degeneracy... 1985-01-01 64 The dwell time for a dissipative quantum system is shown to increase with barrier width. This clearly precludes the Hartman effect for dissipative systems. Here the calculation has been done for an inverted parabolic potential barrier. 2013-05-01 65 SciTech Connect Determining the physical Hilbert space is often considered the most difficult but crucial part of completing the quantization of a constrained system. In such a situation it can be more economical to use effective constraint methods, which are extended here to relativistic systems as they arise for instance in quantum cosmology. By sidestepping explicit constructions of states, such tools allow one to arrive much more feasibly at results for physical observables, at least in semiclassical regimes. Several questions discussed recently regarding effective equations and state properties in quantum cosmology, including the spreading of states and quantum backreaction, are addressed by the examples studied here. Bojowald, Martin; Tsobanjan, Artur [Institute for Gravitation and the Cosmos, Pennsylvania State University, 104 Davey Lab, University Park, Pennsylvania 16802 (United States) 2009-12-15 66 The effects in a quantum-mechanical system form a partial algebra and a partially ordered set which is the prototypical example of the effect algebras discussed in this paper. The relationships among effect algebras and such structures as orthoalgebras and orthomodular posets are investigated, as are morphisms and group-valued measures (or charges) on effect algebras. It is proved that there... D. J. Foulis; M. K. Bennett 1994-01-01 67 The magnetoelectric response and its quantum relaxation phenomenon have been investigated for a single crystal of yttrium iron garnet.
The electric-dipole moments, built in by excess localized electrons forming Fe2+ sites, never freeze even at the lowest temperature and relax through a quantum tunneling process. Application of a magnetic field enhances the dielectric relaxation strength and gives rise to a large magnetocapacitance effect (~13% at 10 K with 0.5 T). We show that this magnetically tunable quantum paraelectricity is associated with the Fe2+-based magnetoelectric centers in which the electric polarization depends on the magnetization vector via the spin-orbit coupling. Yamasaki, Yuichi; Kohara, Yuki; Tokura, Yoshinori 2009-10-01 68 PubMed Decoherence of quantum objects in noisy environments is important in quantum sciences and technologies. It is generally believed that different processes coupled to the same noise source have similar decoherence behaviors and that stronger noise causes faster decoherence. Here we show that in a quantum bath, the case can be the opposite. We predict that the multitransition of a nitrogen-vacancy center spin-1 in diamond can have a longer coherence time than the single transitions, even though the former suffers noise from the nuclear spin bath twice as strong as the latter. This anomalous decoherence effect is due to manipulation of the bath evolution via flips of the center spin. PMID:21699338 Zhao, Nan; Wang, Zhen-Yu; Liu, Ren-Bao 2011-05-27 69 We present evidence that the quantum Zeno effect, otherwise working only for microscopic systems, may also work for large black holes (BH's). The expectation that a BH geometry should behave classically at time intervals larger than the Planck time t_Pl indicates that the quantum process of measurement of classical degrees of freedom takes a time of the order of t_Pl.
Since a BH has only a few classical degrees of freedom, such a fast measurement makes a macroscopic BH strongly susceptible to the quantum Zeno effect, which repeatedly collapses the quantum state to the initial one, the state before the creation of Hawking quanta. By this mechanism, Hawking radiation from a BH of mass M is strongly suppressed by a factor of the order of m_Pl/M. Nikolić, Hrvoje 2014-06-01 70 SciTech Connect The coherent evolution of two qubits mediated by a set of bosonic field modes is investigated. By assuming a specific asymmetric encoding of the quantum states in the internal levels of the qubits, we show that entangling quantum gates can be realized, with high fidelity, even when a large number of mediating modes is involved. The effect of losses and imperfections on the gates' operation is also considered in detail. Ye Saiyun [Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT (United Kingdom); Department of Physics, Fuzhou University, Fuzhou 350002 (China); Yang Zhenbiao; Zheng Shibiao [Department of Physics, Fuzhou University, Fuzhou 350002 (China); Serafini, Alessio [Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT (United Kingdom)] 2010-07-15 71 SciTech Connect In this paper, we consider a realistic model of inflation embedded in the framework of loop quantum cosmology. The phase of inflation is preceded here by the phase of a quantum bounce. We show how the parameters of inflation depend on the initial conditions established in the contracting, prebounce phase. Our investigations indicate that the bounce phase easily sets proper initial conditions for the inflation. Subsequently, we study observational effects that might arise due to the quantum gravitational modifications. We obtain preliminary observational constraints on the Barbero-Immirzi parameter {gamma}, the critical density {rho}{sub c}, and the parameter {lambda}.
In the next step, we study effects on power spectrum of perturbations. We calculate spectrum of perturbations from the bounce and from the joined bounce+inflation phase. Based on these studies, we indicate a possible way to relate quantum cosmological models with the astronomical observations. Using the Sachs-Wolfe approximation, we calculate the spectrum of the superhorizontal CMB anisotropies. We show that quantum cosmological effects can, in the natural way, explain suppression of the low CMB multipoles. We show that fine-tuning is not required here, and the model is consistent with observations. We also analyze other possible probes of the quantum cosmologies and discuss perspectives of their implementation. Mielczarek, Jakub [Astronomical Observatory, Jagiellonian University, 30-244 Cracow, Orla 171 (Poland) and Laboratoire de Physique Subatomique et de Cosmologie, 53, avenue des Martyrs, 38026 Grenoble cedex (France) 2010-03-15 72 It has been observed in Gen. II image intensifiers that a certain fraction of the photoelectrons emitted from the photocathode strike the glass matrix of the front micro-channel plate. These photoelectrons recoil with a distribution of lateral momenta, resulting in a diffusion of the electrons from their original incident position. When used with scintillating fibers, this can result in an Jennifer Fues; W. R. Binns; Paul L. Hink; Kim Slavis; D. H. Kaplan 1996-01-01 73 National Technical Information Service (NTIS) An overview on the theoretic formalism and up to date applications in quantum condensed matter physics of the effective potential and effective hamiltonian methods is given. The main steps of their unified derivation by the so-called pure-quantum self-con... A. Cuccoli R. Giachetti V. Tognetti R. Vaia P. Verrucchi 1995-01-01 74 Hydrogen and helium capture reactions are important in many astrophysical environments. 
Measurements in inverse kinematics using recoil separators have demonstrated a particularly sensitive technique for studying low-yield capture reactions (M. S. Smith, C. E. Rolfs, and C. A. Barnes, Nucl. Instrum. Meth. Phys. Res. A 306 (1991) 233). This approach allows a low background rate to be achieved with a high detection efficiency (about 50%) for the particles of interest using a device with only modest acceptance. Recoil separators using a variety of ion-optic configurations have been installed at numerous accelerator facilities in the past decade and have been used to measure, for example, alpha capture reactions using stable beams (D. Rogalla et al., Eur. Phys. J. 6 (1999) 471) and proton capture reactions using radioactive ion beams (S. Bishop et al., Phys. Rev. Lett. 90 (2003) 162501). Measurements in inverse kinematics are the only viable means for studying reactions on short-lived nuclei that are crucial for understanding stellar explosions, and a recoil separator optimized for the measurement of capture reactions with radioactive ion beams figures prominently in the design of the low energy experimental area at the Rare Isotope Accelerator (RIA). The operational requirements for such a device will be outlined, and recoil separator designs and characteristics will be presented. Blackmon, J. C. 2004-10-01 75 It is commonly believed that for the understanding of the behaviour of large, macroscopic objects there is no need to invoke any genuine quantum entanglement - Einstein's "spooky action at a distance." We show that this belief is fundamentally mistaken and that entanglement is crucial to correctly describe some macroscopic properties of solids. We demonstrate that macroscopic thermodynamical properties - such as internal energy, heat capacity or magnetic susceptibility - can detect quantum entanglement in solids in the thermodynamical limit even at moderately high temperatures.
We identify the parameter regions (critical values of magnetic field and temperature) within which entanglement is witnessed by these thermodynamical quantities. Finally, we demonstrate that two different experiments, performed in 1963 and in 2000, clearly and conclusively indicate that entanglement exists in macroscopic samples of copper nitrate at temperatures below 5 kelvin. We interpret our results as indicating that entanglement may play a broad generic role in macroscopic phenomena. Brukner, Caslav; Vedral, Vlatko; Zeilinger, Anton 2005-03-01 76 In this paper, we explain a magneto quantum hydrodynamics (MQHD) method for the study of the quantum evolution of a system of spinning fermions in an external electromagnetic field. The fundamental equations of microscopic quantum hydrodynamics (the momentum balance equation and the magnetic moment density equation) are derived from the many-particle microscopic Schrödinger equation with a spin-spin and Coulomb modified Hamiltonian. Using the developed approach, an extended vorticity evolution equation for the quantum spinning plasma is derived. The effects of the new spin forces and spin-spin interaction contributions on the motion of fermions, the evolution of the magnetic moment density, and vorticity generation are predicted. The influence of the intrinsic spin of electrons on whistler mode turbulence is investigated. The results can be used for theoretical studies of spinning many-particle systems, especially dense quantum plasmas in compact astrophysical objects, plasmas in semiconductors, micro-mechanical systems, and quantum X-ray free-electron lasers. Trukhanova, M., Iv. 2013-11-01 77 SciTech Connect This paper provides a brief history of the evolution of the Berkeley experiments on macroscopic quantum effects in superfluid helium.
The narrative follows the evolution of the experiments proceeding from the detection of single vortex lines to vortex photography to quantized circulation in 3He to Josephson effects and superfluid gyroscopes in both 4He and 3He. Packard, Richard [Physics Department, University of California, Berkeley, CA 94720 (United States) 2006-09-07 78 In many situations, one can approximate the behavior of a quantum system, i.e. a wave function subject to a partial differential equation, by effective classical equations which are ordinary differential equations. A general method and geometrical picture are developed and shown to agree with effective action results, commonly derived through path integration, for perturbations around a harmonic oscillator ground state. Martin Bojowald; Aureliano Skirzewski 2006-01-01 79 SciTech Connect Within a perturbative cosmological regime of loop quantum gravity corrections to effective constraints are computed. This takes into account all inhomogeneous degrees of freedom relevant for scalar metric modes around flat space and results in explicit expressions for modified coefficients and of higher order terms. It also illustrates the role of different scales determining the relative magnitude of corrections. Our results demonstrate that loop quantum gravity has the correct classical limit, at least in its sector of cosmological perturbations around flat space, in the sense of perturbative effective theory. Bojowald, Martin; Kagan, Mikhail; Hernandez, Hector H.; Skirzewski, Aureliano [Institute for Gravitational Physics and Geometry, Pennsylvania State University, 104 Davey Lab, University Park, Pennsylvania 16802 (United States); Max-Planck-Institut fuer Gravitationsphysik, Albert-Einstein-Institut, Am Muehlenberg 1, D-14476 Potsdam (Germany) 2007-03-15 80 It has been observed in Gen. II image intensifiers that a certain fraction of the photoelectrons emitted from the photocathode strike the glass matrix of the front micro-channel plate. 
These photoelectrons recoil with a distribution of lateral momenta, resulting in a diffusion of the electrons from their original incident position. When used with scintillating fibers, this can result in an error in the assignment of the particular fiber associated with the event. Data have been analyzed from a calibration of the ACE/CRIS Scintillating Optical Fiber Trajectory Detector using 155 MeV/nucleon He, Li, C, N, O, Ne, and Ar obtained at the Michigan State University National Superconducting Cyclotron Laboratory. The probability of a fiber misassignment due to the lateral diffusion of recoil photoelectrons will be presented as a function of the particle's incident angle and charge. Fues, Jennifer; Binns, W. R.; Hink, Paul L.; Slavis, Kim; Kaplan, D. H. 1996-05-01 81 In quantum gravity, the three fundamental constants c, G, and hbar provide us with a new scale. It is generally assumed that quantum effects are important only in situations where the space-time curvature is large, of the order of this Planck scale. However, one can envisage situations in which physical fields have Planckian frequencies but such low amplitudes that the curvature is small. Can one trust classical or semi-classical theory in such domains? To probe this question, we will consider an exactly soluble model: three dimensional gravity coupled to Maxwell fields, assuming axi-symmetry. The quantum fluctuations in the geometry turn out to be very large unless the number and the frequency of photons satisfy the inequality N (hbar G ω)^2 << 1. Thus, even when an electromagnetic wave of Planckian frequency has such low amplitude that the expectation value of the number of photons is just one, the quantum uncertainties in the metric are so large that classical and semi-classical approximations fail. This is a purely 'coulombic' effect, unrelated to gravitons.
It arises because non-linearities of Einstein's equations magnify the tiny fluctuations in the Maxwell field into huge uncertainties in the geometry. These results also hold for certain sectors of four dimensional quantum gravity but are absent for linearized gravity coupled to matter. Ashtekar, Abhay 1997-04-01 82 The quantum Hall effect (QHE), one example of a quantum phenomenon that occurs on a truly macroscopic scale, has attracted intense interest since its discovery in 1980 and has helped elucidate many important aspects of quantum physics. It has also led to the establishment of a new metrological standard, the resistance quantum. Disappointingly, however, the QHE has been observed only K. S. Novoselov; Philip Kim; Zhigang Jiang; Horst Stormer; Yuanbo Zhang; Sergey Morozov; G. S. Boebinger; P. Kim; A. K. Geim 2007-01-01 83 We have studied experimentally the effect of a depolarizing quantum channel on polarization-encoded weak-pulse BB84 and SARG04 quantum cryptography. Experimental results show that, in real-world conditions in which channel depolarization cannot be ignored, BB84 is more robust than SARG04 against the effect of the depolarizing quantum channel. Jeong, Youn-Chang; Kim, Yong-Su; Kim, Yoon-Ho 2010-08-01 84 Nuclear decay induced {sup 37}Cl ion desorption from the electron capture decay {sup 37}Ar → {sup 37}Cl + ν is reported for the first time. A mixture of one part {sup 36}Ar and ~5×10^-5 parts {sup 37}Ar ({sup 36/37}Ar) is physisorbed on a gold-plated Si wafer kept at 16 K under ultrahigh vacuum conditions. The time of flight (TOF) of recoiled L. Zhu; R. Avci; G. J. Lapeyre; M. M. Hindi; R. L. Kozub; S. J. Robinson 1994-01-01 85 Within a perturbative cosmological regime of loop quantum gravity, corrections to effective constraints are computed.
This takes into account all inhomogeneous degrees of freedom relevant for scalar metric modes around flat space and results in explicit expressions for modified coefficients and of higher order terms. It also illustrates the role of different scales determining the relative magnitude of corrections. Our Martin Bojowald; Hector H. Hernandez; Mikhail Kagan; Aureliano Skirzewski 2006-01-01 86 Within a perturbative cosmological regime of loop quantum gravity corrections to effective constraints are computed. This takes into account all inhomogeneous degrees of freedom relevant for scalar metric modes around flat space and results in explicit expressions for modified coefficients and of higher order terms. It also illustrates the role of different scales determining the relative magnitude of corrections. Our Martin Bojowald; Mikhail Kagan; Hector H. Hernández; Aureliano Skirzewski 2007-01-01 87 National Technical Information Service (NTIS) A theory of the fractional quantum Hall effect is constructed by introducing 3-particle interactions breaking the symmetry for nu =1/3 according to a degeneracy theorem proved here. An order parameter is introduced and a gap in the single particle spectru... 1984-01-01 88 PubMed Central Dissociation of molecular hydrogen is an important step in a wide variety of chemical, biological, and physical processes. Due to the light mass of hydrogen, it is recognized that quantum effects are often important to its reactivity. However, understanding how quantum effects impact the reactivity of hydrogen is still in its infancy. Here, we examine this issue using a well-defined Pd/Cu(111) alloy that allows the activation of hydrogen and deuterium molecules to be examined at individual Pd atom surface sites over a wide range of temperatures. Experiments comparing the uptake of hydrogen and deuterium as a function of temperature reveal completely different behavior of the two species. 
The rate of hydrogen activation increases at lower sample temperature, whereas deuterium activation slows as the temperature is lowered. Density functional theory simulations in which quantum nuclear effects are accounted for reveal that tunneling through the dissociation barrier is prevalent for H2 up to ~190 K and for D2 up to ~140 K. Kinetic Monte Carlo simulations indicate that the effective barrier to H2 dissociation is so low that hydrogen uptake on the surface is limited merely by thermodynamics, whereas the D2 dissociation process is controlled by kinetics. These data illustrate the complexity and inherent quantum nature of this ubiquitous and seemingly simple chemical process. Examining these effects in other systems with a similar range of approaches may uncover temperature regimes where quantum effects can be harnessed, yielding greater control of bond-breaking processes at surfaces and uncovering useful chemistries such as selective bond activation or isotope separation. 2014-01-01 89 PubMed Dissociation of molecular hydrogen is an important step in a wide variety of chemical, biological, and physical processes. Due to the light mass of hydrogen, it is recognized that quantum effects are often important to its reactivity. However, understanding how quantum effects impact the reactivity of hydrogen is still in its infancy. Here, we examine this issue using a well-defined Pd/Cu(111) alloy that allows the activation of hydrogen and deuterium molecules to be examined at individual Pd atom surface sites over a wide range of temperatures. Experiments comparing the uptake of hydrogen and deuterium as a function of temperature reveal completely different behavior of the two species. The rate of hydrogen activation increases at lower sample temperature, whereas deuterium activation slows as the temperature is lowered.
Density functional theory simulations in which quantum nuclear effects are accounted for reveal that tunneling through the dissociation barrier is prevalent for H2 up to ~190 K and for D2 up to ~140 K. Kinetic Monte Carlo simulations indicate that the effective barrier to H2 dissociation is so low that hydrogen uptake on the surface is limited merely by thermodynamics, whereas the D2 dissociation process is controlled by kinetics. These data illustrate the complexity and inherent quantum nature of this ubiquitous and seemingly simple chemical process. Examining these effects in other systems with a similar range of approaches may uncover temperature regimes where quantum effects can be harnessed, yielding greater control of bond-breaking processes at surfaces and uncovering useful chemistries such as selective bond activation or isotope separation. PMID:24684530 Kyriakou, Georgios; Davidson, Erlend R M; Peng, Guowen; Roling, Luke T; Singh, Suyash; Boucher, Matthew B; Marcinkowski, Matthew D; Mavrikakis, Manos; Michaelides, Angelos; Sykes, E Charles H 2014-05-27 90 A non-integrable phase-factor global approach to gravitation is developed by using the similarity of teleparallel gravity to electromagnetism. The phase shifts of both the COW and the gravitational Aharonov-Bohm effects are obtained. It is then shown, by considering a simple slit experiment, that in the classical limit the global approach yields the same result as the gravitational Lorentz force equation of teleparallel gravity. It represents, therefore, the quantum mechanical version of the classical description provided by the gravitational Lorentz force equation. As teleparallel gravity can be formulated independently of the equivalence principle, it will consequently require no generalization of this principle at the quantum level. Aldrovandi, R.; Pereira, J. G.; Vu, K. H.
2004-01-01

91 Monolayer graphite films, or graphene, have quasiparticle excitations that can be described by 2+1 dimensional Dirac theory. We demonstrate that this produces an unconventional form of the quantized Hall conductivity σ_xy = -(2e²/h)(2n+1) with n = 0, 1, ..., which notably distinguishes graphene from other materials where the integer quantum Hall effect was observed. This unconventional quantization is caused by the quantum V. P. Gusynin; S. G. Sharapov 2005-01-01

92 PubMed Symmetry-breaking interactions have a crucial role in many areas of physics, ranging from classical ferrofluids to superfluid ³He and d-wave superconductivity. For superfluid quantum gases, a variety of new physical phenomena arising from the symmetry-breaking interaction between electric or magnetic dipoles are expected. Novel quantum phases in optical lattices, such as chequerboard or supersolid phases, are predicted for dipolar bosons. Dipolar interactions can also enrich considerably the physics of quantum gases with internal degrees of freedom. Arrays of dipolar particles could be used for efficient quantum information processing. Here we report the realization of a chromium Bose-Einstein condensate with strong dipolar interactions. By using a Feshbach resonance, we reduce the usual isotropic contact interaction, such that the anisotropic magnetic dipole-dipole interaction between 52Cr atoms becomes comparable in strength. This induces a change of the aspect ratio of the atom cloud; for strong dipolar interactions, the inversion of ellipticity during expansion (the usual 'smoking gun' evidence for a Bose-Einstein condensate) can be suppressed. These effects are accounted for by taking into account the dipolar interaction in the superfluid hydrodynamic equations governing the dynamics of the gas, in the same way as classical ferrofluids can be described by including dipolar terms in the classical hydrodynamic equations.
Our results are a first step in the exploration of the unique properties of quantum ferrofluids. PMID:17687319 Lahaye, Thierry; Koch, Tobias; Fröhlich, Bernd; Fattori, Marco; Metz, Jonas; Griesmaier, Axel; Giovanazzi, Stefano; Pfau, Tilman 2007-08-01

93 PubMed Over recent decades, quantum effects such as coherent electronic energy transfer and electron and hydrogen tunneling have been uncovered in biological processes. In this Perspective, we highlight some of the main conceptual and methodological tools employed in the field to investigate electron tunneling in proteins, with a particular emphasis on the methodologies we are currently developing. In particular, we describe our recent contributions to the development of a mixed quantum-classical framework aimed at describing physical systems lying at the border between the quantum and semi-classical worlds. We present original results obtained by combining our approach with constrained Density Functional Theory calculations. Moving to coarser levels of description, we summarize our latest findings on electron transfer between two redox proteins, thereby showing the stabilization of inter-protein, water-mediated, electron-transfer pathways. PMID:22434318 de la Lande, Aurélien; Babcock, Nathan S; Řezáč, Jan; Lévy, Bernard; Sanders, Barry C; Salahub, Dennis R 2012-05-01

94 The influence of electron exchange and quantum screening on the collisional entanglement fidelity for the elastic electron–ion collision is investigated in degenerate quantum plasmas. The effective Shukla–Eliasson potential and the partial wave method are used to obtain the collisional entanglement fidelity in quantum plasmas as a function of the electron-exchange parameter, Fermi energy, plasmon energy and collision energy. The results show that the quantum screening effect enhances the entanglement fidelity in quantum plasmas. However, it is found that the electron-exchange effect strongly suppresses the collisional entanglement fidelity.
Hence, we have found that the influence of the electron exchange reduces the transmission of quantum information in quantum plasmas. In addition, it is found that, although the entanglement fidelity decreases with an increase of the Fermi energy, it increases with increasing plasmon energy in degenerate quantum plasmas. Hong, Woo-Pyo; Jung, Young-Dae 2014-06-01

95 We propose a way to realize a programmable quantum current standard (PQCS) from the Josephson voltage standard and the quantum Hall resistance standard (QHR), exploiting the multiple connection technique provided by the quantum Hall effect (QHE) and the exactness of the cryogenic current comparator. The PQCS could lead to breakthroughs in electrical metrology, like the realization of a programmable quantum current source, a quantum ampere-meter, and a simplified closure of the quantum metrological triangle. Moreover, very accurate universality tests of the QHE could be performed by comparing PQCSs based on different QHRs. Poirier, W.; Lafont, F.; Djordjevic, S.; Schopfer, F.; Devoille, L. 2014-01-01

96 Tagging with β-particles at the focal plane of a recoil separator has been shown to be an effective technique for the study of exotic proton-rich nuclei. This article describes three new pieces of apparatus used to greatly improve the sensitivity of the recoil-beta tagging technique. These include a highly-pixelated double-sided silicon strip detector, a plastic phoswich detector for discriminating high-energy β-particles, and a charged-particle veto box. The performance of these new detectors is described and characterised, and the resulting improvements are discussed. Henderson, J.; Ruotsalainen, P.; Jenkins, D. G.; Scholey, C.; Auranen, K.; Davies, P. J.; Grahn, T.; Greenlees, P. T.; Henry, T. W.; Herzáň, A.; Jakobsson, U.; Joshi, P.; Julin, R.; Juutinen, S.; Konki, J.; Leino, M.; Lotay, G.; Nichols, A.
J.; Obertelli, A.; Pakarinen, J.; Partanen, J.; Peura, P.; Rahkila, P.; Sandzelius, M.; Sarén, J.; Sorri, J.; Stolze, S.; Uusitalo, J.; Wadsworth, R. 2013-04-01

97 The cosmological character of Gamma-Ray Bursts (GRBs), short and intense bursts of 100 keV - 1 MeV photons and 10^5 - 10^9 GeV neutrinos, makes it plausible to probe quantum gravity, which is expected to become important near the Planck scale: E_QG ~ E_P := √(ħc⁵/G) ≈ 10^19 GeV. To see why, one can look at, for instance, dispersion relations for photons in a space endowed with a structure required by an underlying quantum gravity theory, say c²p̄²/E² = 1 + ξ E/E_QG + O(E/E_QG)², with ξ a parameter of order one, E the energy and p̄ the spatial momentum of the photon. The result is a speed v/c = (1/c) ∂E/∂p = 1 - ξ E/E_QG + O(E/E_QG)², and a retardation time with respect to propagation at speed c of Δt ≈ ξ (E/E_QG)(L/c), L being the distance traveled. With E ≈ 0.20 MeV and L ≈ 10^10 ly, one gets Δt ≈ 10^-5 s. This can be contrasted with the time resolution of GRBs, Δt ≈ 10^-3 s. Indeed, effective dispersion relations might have observable imprints of physics at the Planck scale, and maybe other aspects of quantum gravity can be probed. Here we point out how non-perturbative quantum general relativity can yield such effective dispersion relations either for photons or neutrinos... Alfaro, J.; Morales-Tecotl, H. A.; Urrutia, L. F. 2002-12-01

98 Using a quantum fluid model, the linear dispersion relation for a FEL pumped by a short wavelength laser wiggler is deduced. Subsequently, a new quantum corrected resonance condition is obtained. It is shown that, in the limit of low energy electron beam and low frequency pump, the quantum recoil effect can be neglected, recovering the classical FEL resonance condition, k_s = 4 k_w γ².
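The order-of-magnitude estimate quoted in the gamma-ray-burst entry (97) above can be checked directly; a minimal sketch, taking ξ = 1 and the abstract's values for E, E_QG and L:

```python
# Order-of-magnitude check of Delta_t ≈ xi * (E / E_QG) * (L / c)
# from the GRB entry above, taking xi = 1.
E = 0.20e-3          # photon energy, GeV (0.20 MeV)
E_QG = 1.0e19        # Planck-scale energy, GeV
C = 2.998e8          # speed of light, m/s
LY_IN_M = 9.461e15   # metres per light-year

L = 1.0e10 * LY_IN_M            # distance travelled: 10^10 ly, in metres
delta_t = (E / E_QG) * (L / C)  # retardation, seconds
print(f"{delta_t:.1e} s")       # 6.3e-06 s, i.e. of order 1e-5 s
```

This reproduces the abstract's Δt ≈ 10^-5 s from its stated inputs.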
On the other hand, for a short wavelength and high energy electron beam, the quantum recoil effect becomes strong and the resonance condition turns into k_s = 2√(k_w/λ̄_c) γ^(3/2), with λ̄_c being the reduced Compton wavelength. As a result, a set of nonlinear coupled equations, which describes the quantum FEL dynamics as a three-wave interaction, is obtained. Neglecting wave propagation effects, this set of equations is solved numerically and results are presented. Monteiro, L. F.; Serbeto, A.; Tsui, K. H.; Mendonça, J. T.; Galvão, R. M. O. 2013-07-01

99 SciTech Connect Using a quantum fluid model, the linear dispersion relation for a FEL pumped by a short wavelength laser wiggler is deduced. Subsequently, a new quantum corrected resonance condition is obtained. It is shown that, in the limit of low energy electron beam and low frequency pump, the quantum recoil effect can be neglected, recovering the classical FEL resonance condition, k_s = 4 k_w γ². On the other hand, for a short wavelength and high energy electron beam, the quantum recoil effect becomes strong and the resonance condition turns into k_s = 2√(k_w/λ̄_c) γ^(3/2), with λ̄_c being the reduced Compton wavelength. As a result, a set of nonlinear coupled equations, which describes the quantum FEL dynamics as a three-wave interaction, is obtained. Neglecting wave propagation effects, this set of equations is solved numerically and results are presented. Monteiro, L. F.; Serbeto, A.; Tsui, K. H. [Instituto de Física, Universidade Federal Fluminense, Campus da Praia Vermelha, Niterói, RJ 24210-346 (Brazil)]; Mendonça, J. T.; Galvão, R. M. O.
[Instituto de Física, Universidade de São Paulo, São Paulo, SP 05508-090 (Brazil)] 2013-07-15

100 Effective equations often provide powerful tools to develop a systematic understanding of detailed properties of a quantum system. This is especially helpful in quantum cosmology, where several conceptual and technical difficulties associated with the full quantum equations can be avoided in this way. Here, effective equations for Wheeler-DeWitt and loop quantizations of spatially flat, isotropic cosmological models sourced by a Martin Bojowald; Hector Hernández; Aureliano Skirzewski 2007-01-01

101 Several recent experiments, on various semiconductor heterostructures, have shown that an insulating phase at zero field can undergo a phase transition to the quantum Hall effect phase in an applied magnetic field (see, for example, H. W. Jiang, C. E. Johnson, K. L. Wang, and S. T. Hannahs, Phys. Rev. Lett. 71, 1439 (1993)). To understand this phenomenon, we have studied the evolution of the quantum Hall effect at low fields (I. Glozman, C. E. Johnson, and H. W. Jiang, Phys. Rev. Lett. 74, 594 (1995)). We found that the chemical potential of the lowest delocalized-state band not only deviates from the host Landau level center, but also "floats up" above the Fermi level as B goes to zero. In the region where the floating of delocalized states is observed, we have also found the position of the conductivity minimum in the density-field plane to be strongly path-dependent. This path dependence has, in fact, given us information to quantitatively link the floating to Landau level mixing (I. Glozman, C. E. Johnson, and H. W. Jiang, Phys. Rev. B 52, R14348 (1995)). Similar studies have been extended to the fractional quantum Hall effect regime (L. W. Wong, H. W. Jiang, and W. J. Schaff, Phys. Rev. B 54, Dec. 15, in press (1996)).
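The two FEL resonance conditions quoted in entries 98 and 99 above can be evaluated numerically; a sketch in which the wiggler wavelength and beam energy are illustrative assumptions, not values from the papers:

```python
import math

# Numerical look at the two FEL resonance conditions quoted in entries 98
# and 99:  classical  k_s = 4 * k_w * gamma**2
#          quantum    k_s = 2 * sqrt(k_w / lambda_c) * gamma**1.5
# The wiggler wavelength and gamma below are illustrative assumptions.
LAMBDA_C = 3.8616e-13         # reduced Compton wavelength hbar/(m_e c), m

lam_w = 1.0e-6                # assumed 1-micron laser-wiggler wavelength
k_w = 2 * math.pi / lam_w     # wiggler wavenumber, 1/m

def k_s_classical(gamma):
    return 4 * k_w * gamma**2

def k_s_quantum(gamma):
    return 2 * math.sqrt(k_w / LAMBDA_C) * gamma**1.5

gamma = 100.0                 # assumed electron-beam Lorentz factor
print(f"classical lambda_s = {2 * math.pi / k_s_classical(gamma):.2e} m")

# The two expressions coincide at gamma_cross = 1 / (4 * k_w * LAMBDA_C);
# well above it the quantum-recoil condition dominates.
gamma_cross = 1 / (4 * k_w * LAMBDA_C)
print(f"crossover gamma ~ {gamma_cross:.1e}")   # ~1e5 for these numbers
```

Setting the two right-hand sides equal gives the crossover γ analytically, which is the "high energy electron beam" regime the abstracts refer to.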
The potential floating of the delocalized states of composite fermions in a vanishing effective field will be discussed. Jiang, H. W. 1997-03-01

102 We present here a classical hydrodynamic model of a two-dimensional fluid which has many properties of the fractional quantum Hall effect (FQHE). This model incorporates the FQHE relation between the vorticity and density of the fluid and exhibits the Hall viscosity and Hall conductivity found in FQHE liquids. We describe the relation of the model to the Chern-Simons-Ginzburg-Landau theory of FQHE and show how Laughlin's wavefunction is annihilated by the quantum velocity operator. Abanov, Alexander G. 2013-07-01

103 A new quantum-effect electronic device is proposed which consists of two one-dimensional electron waveguides which, over a certain interaction length, come in close proximity to each other so that coherent quantum mechanical tunneling can take place between them. The degree of coupling between the two waveguides is controlled by modulating, through the field-effect action of a gate, the height of the potential energy barrier which separates them. If an electron wave packet is injected into this device through one of the waveguides, then the probability density of the electron wave function will oscillate back and forth between the two waveguides as the packet advances. The gate voltage can be adjusted to achieve complete electron transfer at either of the two waveguides at the output of the device. The device, therefore, behaves as a current switch. First-order calculations indicate that this device can be fabricated with state-of-the-art nanolithography. del Alamo, Jesus A.; Eugster, Cristopher C. 1990-01-01

104 As a realization of the quantum Zeno effect, we consider electron tunneling between two quantum dots, with one of the dots coupled to a quantum point contact detector. The coupling leads to decoherence and to the suppression of tunneling.
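The back-and-forth probability oscillation described in the coupled-waveguide entry (103) above follows standard coupled-mode theory; a sketch in which the coupling strength and lengths are made-up illustrative values, not numbers from the paper:

```python
import math

# Coupled-mode sketch of the two-waveguide electron directional coupler in
# entry 103: an electron injected into guide 1 is later found in guide 2
# with probability P2(x) = sin^2(kappa * x). kappa and the lengths here
# are illustrative assumptions, not values from the paper.
def transfer_probability(kappa, x):
    """Probability of finding the electron in guide 2 after distance x."""
    return math.sin(kappa * x) ** 2

kappa = 1.0e6                     # assumed gate-tunable coupling, 1/m
L_int = math.pi / (2 * kappa)     # interaction length for full transfer
print(transfer_probability(kappa, L_int))      # 1.0: complete switch-over
print(transfer_probability(kappa, 2 * L_int))  # ~0: back in guide 1
```

Tuning kappa with the gate moves the transfer point along the interaction length, which is the current-switch action the entry describes.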
When the detector is driven with an ac voltage, a parametric resonance occurs which strongly counteracts decoherence. We propose a novel experiment with which it is possible to observe both the quantum Zeno effect and the parametric resonance in electric transport. Hackenbroich, G.; Rosenow, B.; Weidenmüller, H. A. 1998-12-01

105 We consider quantum electrodynamics at finite temperatures. By making use of the real time formalism we compute, on the one-loop level, the finite-temperature correction to the mass of the electron and to the anomalous magnetic moment a_e. The gauge-invariant correction to the electron mass is found to be a ten percent effect at a temperature of the order of 2×10^10 K. G. Peressutti; B.-S. Skagerstam 1982-01-01

106 We study the effects of spin orbit interactions on the low energy electronic structure of a single plane of graphene. We find that in an experimentally accessible low temperature regime the symmetry allowed spin orbit potential converts graphene from an ideal two dimensional semimetallic state to a quantum spin Hall insulator. This novel electronic state of matter is gapped in C. L. Kane; E. J. Mele 2005-01-01

107 In view of the numerous physical and astrophysical applications of the new quantum statistics, it may be worth while to investigate the Joule-Thomson effect for a gas obeying Fermi-Dirac or Bose-Einstein statistics. The calculation is simple and runs on the usual lines. The results obtained are quite interesting. It is found that for a degenerate gas, degenerate in the sense D. S. Kothari; B. N. Srivasava 1937-01-01

108 The pinning effect is studied in a Gaussian quantum dot using the improved Wigner-Brillouin perturbation theory (IWBPT) in the presence of electron-phonon interaction. The electron ground state plus one phonon state is degenerate with the electron in the first excited state.
The electron-phonon interaction lifts the degeneracy, and the first excited state gets pinned to the ground state plus one phonon state as we increase the confinement frequency. 2014-04-01

109 PubMed In grand unified theories with large numbers of fields, renormalization effects significantly modify the scale at which quantum gravity becomes strong. This in turn can modify the boundary conditions for coupling constant unification, if higher dimensional operators induced by gravity are taken into consideration. We show that the generic size of, and the uncertainty in, these effects from gravity can be larger than the two-loop corrections typically considered in renormalization group analyses of unification. In some cases, gravitational effects of modest size can render unification impossible. PMID:18999739 Calmet, Xavier; Hsu, Stephen D H; Reeb, David 2008-10-24

110 SciTech Connect We point out that mirror dark matter predicts low-energy (E_R ≲ 2 keV) electron recoils from mirror electron scattering as well as nuclear recoils from mirror ion scattering. The former effect is examined and applied to the recently released low-energy electron recoil data from the CDMS Collaboration. We speculate that the sharp rise in electron recoils seen in CDMS below 2 keV might be due to mirror electron scattering and show that the parameters suggested by the data are roughly consistent with the mirror dark matter explanation of the annual modulation signal observed in the DAMA/Libra and DAMA/NaI experiments. Thus, the CDMS data offer tentative evidence supporting the mirror dark matter explanation of the DAMA experiments, which can be more rigorously checked by future low-energy electron recoil measurements. Foot, R. [School of Physics, University of Melbourne, Victoria 3010 (Australia)] 2009-11-01

111 This paper investigates GaAs/AlGaAs modified quantum dot nanocrystals and the GaAs/AlGaAs/GaAs/AlGaAs quantum dot-quantum well heteronanocrystal.
These quantum dots have been analyzed by finite element numerical methods. Simulations were carried out for the state n=1, l=0, m=0, where n, l, and m are the principal, orbital, and magnetic quantum numbers. The effects of variation in the radii of the layers, such as the total radius and the radii of the GaAs core, shell and AlGaAs barriers, on the wavelength and emission coefficient are studied. For the first time, the effect of mole fraction on the emission coefficient is also investigated. Meanwhile, one of the problems in biological applications is alteration of the emission wavelength of a quantum dot by changes in its dimensions. This problem can be resolved by changing the potential profile. Elyasi, P.; SalmanOgli, A. 2014-05-01

112 SciTech Connect An epitaxy-on-recoil-implanted-substrate (ERIS) technique is presented. A disordered surface layer, generated by forward recoil implantation of ≈0.7-3×10^15 cm^-2 of oxygen during Ar plasma etching of surface oxide, is shown to facilitate the subsequent epitaxial growth of ≈25-35-nm-thick CoSi2 layers on Si(100). The dependence of the epitaxial fraction of the silicide on the recoil-implantation parameters is studied in detail. A reduction in the silicide reaction rate due to recoil-implanted oxygen is shown to be responsible for the observed epitaxial formation, similar to mechanisms previously observed for interlayer-mediated growth techniques. Oxygen is found to remain inside the fully reacted CoSi2 layer, likely in the form of oxide precipitates. The presence of these oxide precipitates, with only a minor effect on the sheet resistance of the silicide layer, has a surprisingly beneficial effect on the thermal stability of the silicide layers. The agglomeration of ERIS-grown silicide layers on polycrystalline Si is significantly suppressed, likely from a reduced diffusivity due to oxygen in the grain boundaries. The implications of the present technique for the processing of deep submicron devices are discussed.
Hashimoto, Shin; Egashira, Kyoko; Tanaka, Tomoya; Etoh, Ryuji; Hata, Yoshifumi; Tung, R. T. [Corporate Manufacturing and Development Division, Semiconductor Company, Matsushita Electric Industrial Co., Ltd., Kyoto 617-8520 (Japan); Department of Physics, Brooklyn College, City University of New York, Brooklyn, New York 11210 (United States)] 2005-01-15

113 Differences in the penetration of Rn-220 recoil atoms injected from a Ra-224 point source into single crystals of Si, SiO2 and KCl were used to record patterns showing directions and planes along which the recoil atoms channeled deeper into the crystal. C. Jech 1972-01-01

114 National Technical Information Service (NTIS) The original aim of this MURI was to combine an experimental effort to develop tools to manipulate quantum coherence in the solid state, based on metallic wires, quantum point contacts, and the quantum Hall effect, with theoretical efforts aimed at unders... C. M. Marcus 1999-01-01

115 In the framework of the one-boson exchange model, we have calculated the effective potentials between the two heavy-meson systems BB̄* and DD̄* from the t- and u-channel π-, η-, ρ-, ω-, and σ-meson exchanges for four sets of quantum numbers: I = 0, J^PC = 1^++; I = 0, J^PC = 1^+-; I = 1, J^PC = 1^++; I = 1, J^PC = 1^+-. We keep the recoil corrections to the BB̄* and DD̄* systems up to O(1/M²). The spin-orbit force appears at O(1/M), which turns out to be important for the very loosely bound molecular states. Our numerical results show that the momentum-related corrections are unfavorable to the formation of the molecular states in the I = 0, J^PC = 1^++ and I = 1, J^PC = 1^+- channels of the DD̄* system. Zhao, Lu; Ma, Li; Zhu, Shi-Lin 2014-05-01

116 SciTech Connect Quasiparticles of charge 1/m in the Fractional Quantum Hall Effect form excitons, which are collective excitations physically similar to the transverse magnetoplasma oscillations of a Wigner crystal.
A variational exciton wavefunction which shows explicitly that the magnetic length is effectively longer for quasiparticles than for electrons is proposed. This wavefunction is used to estimate the dispersion relation of these excitons and the matrix elements to generate them optically out of the ground state. These quantities are then used to describe a type of nonlinear conductivity which may occur in these systems when they are relatively clean. Laughlin, R.B. 1984-09-01

117 Ping-Pong vacuum cannons, potato guns, and compressed air cannons are popular and dramatic demonstrations for lecture and lab [1-3]. Students enjoy them for the spectacle, but they can also be used effectively to teach physics. Recently we have used a student-built compressed air cannon as a laboratory activity to investigate impulse, conservation of momentum, and kinematics. It is possible to use the cannon, along with the output from an electronic force plate, as the basis for many other experiments in the laboratory. In this paper, we will discuss the recoil experiment done by our students in the lab and also mention a few other possibilities that this apparatus could be used for. Taylor, Brett 2006-12-01

118 An ensemble of periodically ordered atoms coherently scatters the light of an incident laser beam. The scattered and the incident light may interfere and give rise to a light intensity modulation and thus to optical dipole forces which, in turn, emphasize the atomic ordering. This positive feedback is at the origin of the collective atomic recoil laser (CARL). We demonstrate this dynamics using ultracold atoms confined by dipole forces in a unidirectionally pumped far red-detuned high-finesse optical ring cavity. Under the influence of an additional dissipative force exerted by an optical molasses, the atoms, starting from an unordered distribution, spontaneously form a density grating moving at constant velocity.
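For the recoil experiment described in the compressed-air-cannon entry (117) above, momentum conservation gives the recoil speed directly; a sketch using invented teaching values, not numbers from the paper:

```python
# Momentum-conservation sketch for the compressed-air-cannon recoil lab
# in entry 117. All masses and velocities are made-up teaching values.
m_projectile = 0.050   # kg (e.g. a 50 g projectile)
v_projectile = 40.0    # m/s muzzle speed
m_cannon = 5.0         # kg, cannon free to recoil

# Total momentum is zero before firing, so the cannon recoils with
# v_recoil = -(m_p * v_p) / m_cannon.
v_recoil = -(m_projectile * v_projectile) / m_cannon
print(f"{v_recoil:.2f} m/s")   # -0.40 m/s

# The impulse read off the force plate equals the projectile's momentum:
impulse = m_projectile * v_projectile   # N*s
```

Integrating the force-plate trace over the firing time and comparing it with this momentum is exactly the impulse-momentum check such a lab aims at.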
Additionally, steady-state lasing is observed in the reverse direction if the pump laser power exceeds a certain threshold. We compare the dynamics of the atomic trajectories to the behavior of globally coupled oscillators, which exhibit phase transitions from incoherent to coherent states if the coupling strength exceeds a critical value. Courteille, Ph. W.; von Cube, C.; Deh, B.; Kruse, D.; Ludewig, A.; Slama, S.; Zimmermann, C. 2005-05-01

119 In the last stages of a black hole merger, the binary can experience a recoil due to asymmetric emission of gravitational radiation. Recent numerical relativity simulations suggest that the recoil velocity can be as high as a few thousand kilometers per second for particular configurations. We consider here the effect of a worst-case scenario for orbital and phase configurations on the hierarchical evolution of the massive black hole (MBH) population. The orbital configuration and spin orientation in the plane are chosen to be the ones yielding the highest possible kick. Masses and spin magnitudes are instead derived self-consistently from the MBH evolutionary models. If seeds form early, for example as remnants of the first stars, almost the totality of the first few generations of binaries are ejected. The fraction of lost binaries decreases at later times due to a combination of the binary mass-ratio distribution becoming shallower and the deepening of the hosts' potential wells. If seeds form at later times, in more massive halos, then the retention rate is much higher. We show that the gravitational recoil does not pose a threat to the evolution of the MBH population that we observe locally in either case, although high-mass seeds seem to be favored. The gravitational recoil is instead a real hazard for (1) MBHs in biased halos at high redshift, where mergers are more common and the potential wells still relatively shallow. Similarly, it is very challenging to retain (2) MBHs merging in star clusters.
Volonteri, Marta 2007-07-01

120 The manifestations of measurements, randomly distributed in time, on the evolution of quantum systems are analyzed in detail. The set of randomly distributed measurements (RDM) is modeled within renewal theory, in which the distribution is characterized by the probability density function (PDF) W(t) of times t between successive events (measurements). The evolution of the quantum system affected by the RDM is shown to be described by the density matrix satisfying the stochastic Liouville equation. This equation is applied to the analysis of the RDM effect on the evolution of a two-level system for different types of RDM statistics, corresponding to different PDFs W(t). The general results obtained are illustrated as applied to the cases of Poissonian (W(t) ~ e^(-w_r t)) and anomalous (W(t) ~ 1/t^(1+ν), ν ≤ 1) RDM statistics. In particular, specific features of the quantum and inverse Zeno effects resulting from the RDM are thoroughly discussed. Shushin, A. I. 2011-02-01

121 We propose a class of variational wave functions with slow variation in spin and charge density and simple vortex structure at infinity, which properly generalize both the Laughlin quasiparticles and baby Skyrmions. We argue, on the basis of these wave functions and a spin-statistics relation in the relevant effective field theory, that the spin of the corresponding quasiparticle has a fractional part related in a universal fashion to the properties of the bulk state. We propose a direct experimental test of this claim. We show that certain spin-singlet quantum Hall states can be understood as arising from primary polarized states by Skyrmion condensation. Nayak, Chetan; Wilczek, Frank 1996-11-01

122 PubMed By using high magnetic fields (up to 60 T), we observe compelling evidence of the integer quantum Hall effect in trilayer graphene.
The magnetotransport fingerprints are similar to those of the graphene monolayer, except for the absence of a plateau at a filling factor of ν = 2. At very low filling factor, the Hall resistance vanishes due to the presence of mixed electron and hole carriers induced by disorder. The measured Hall resistivity plateaus are well reproduced theoretically, using self-consistent Hartree calculations of the Landau levels and assuming an ABC stacking order of the three layers. PMID:22026788 Kumar, A; Escoffier, W; Poumirol, J M; Faugeras, C; Arovas, D P; Fogler, M M; Guinea, F; Roche, S; Goiran, M; Raquet, B 2011-09-16

123 PubMed A periodically driven system with spatial asymmetry can exhibit a directed motion facilitated by thermal or quantum fluctuations. This so-called ratchet effect has fascinating ramifications in engineering and natural sciences. Graphene is nominally a symmetric system. Driven by a periodic electric field, no directed electric current should flow. However, if the graphene has lost its spatial symmetry due to its substrate or adatoms, an electronic ratchet motion can arise. We report an experimental demonstration of such an electronic ratchet in graphene layers, proving the underlying spatial asymmetry. The orbital asymmetry of the Dirac fermions is induced by an in-plane magnetic field, whereas the periodic driving comes from terahertz radiation. The resulting magnetic quantum ratchet transforms the a.c. power into a d.c. current, extracting work from the out-of-equilibrium electrons driven by undirected periodic forces. The observation of ratchet transport in this purest possible two-dimensional system indicates that the orbital effects may appear and be substantial in other two-dimensional crystals such as boron nitride, molybdenum dichalcogenides and related heterostructures. The measurable orbital effects in the presence of an in-plane magnetic field provide strong evidence for the existence of structure inversion asymmetry in graphene.
PMID:23334170 Drexler, C; Tarasenko, S A; Olbrich, P; Karch, J; Hirmer, M; Müller, F; Gmitra, M; Fabian, J; Yakimova, R; Lara-Avila, S; Kubatkin, S; Wang, M; Vajtai, R; Ajayan, P M; Kono, J; Ganichev, S D 2013-02-01

124 The infrared-ultraviolet properties of quantum gravity suggest on very general grounds that hard short-distance scattering processes are highly suppressed for center-of-mass scattering energies beyond the fundamental Planck scale. If this scale is not too far above the electroweak scale, these nonperturbative quantum gravity effects could be manifest as an extinction of high transverse momentum jets at the LHC. To model these effects, we implement an extinction Monte Carlo modification of the PYTHIA event generator based on a Veneziano form factor with a large absorptive branch cut modification of hard QCD scattering processes. Using this we illustrate the leading effects of extinction on the inclusive jet transverse momentum spectrum at the LHC. We estimate that an extinction mass scale of up to roughly half the center-of-mass beam collision energy could be probed with high statistics data. Experimental searches at the LHC for jet extinction would be complementary to ongoing searches for the related phenomenon of excess production of high multiplicity final states. Kilic, Can; Lath, Amitabh; Rose, Keith; Thomas, Scott 2014-01-01

125 The quantum 1/f effect is a fundamental new aspect of quantum mechanics, quantum electrodynamics, and quantum field theory in general, with practical importance in most high-technology applications. It is based on the reaction of material currents to their spontaneous emission of infra-quanta such as photons, gravitons, transversal phonons, spin waves, etc. It is the result of decoherence of entangled states of particles and their spontaneous bremsstrahlung, a consequence of infrared-divergent interactions between particles and their field.
It is the quantum manifestation of classical turbulence and it represents the most fundamental form of quantum chaos. It is described by the simple universal formula of conventional and coherent quantum 1/f noise, important in engineering, science and technology. It provides a new physical meaning to the notion of "constant current", in time and space, similar to the 1937 definition of elastic processes by Bloch and Nordsieck. Finally, it is an interesting aspect of the concrete way in which matter generates its forms of existence, for instance time and space. Quantum 1/f spin decoherence rates, known to severely limit the performance of quantum computers, are shown here to be also affected by the quantum 1/f effect. Indeed, the elementary spin-flip process has a bremsstrahlung amplitude, leading to a non-stationary state with 1/f quantum fluctuations, and a disentangled system of non-localized low-frequency photons with negative conditional entropy. Thus, decoherence is due to the entangled system's interaction with the rest of the world, as is its quantum 1/f fluctuation, which can be expressed in qubits. Increasing the spin excess n is one way to reduce these fluctuations. In general, we find that both decoherence and its quantum 1/f noise could be controlled by better insulating the system in a new way. Handel, Peter H. 2001-06-01

126 SciTech Connect We study the non-Markovian effect on the dynamics of the quantum discord by exactly solving a model consisting of two independent qubits subject to two zero-temperature non-Markovian reservoirs, respectively. Considering the two qubits initially prepared in Bell-like or extended Werner-like states, we show that there is no occurrence of sudden death, but only instantaneous disappearance of the quantum discord at some time points, in comparison to the entanglement sudden death in the same range of the parameters of interest.
This implies that the quantum discord is more useful than the entanglement to describe the quantum correlation involved in quantum systems. Wang Bo; Xu Zhenyu [Wuhan Institute of Physics and Mathematics, Chinese Academy of Sciences, Wuhan 430071 (China); Graduate School of the Chinese Academy of Sciences, Beijing 100049 (China); Chen Zeqian; Feng Mang [Wuhan Institute of Physics and Mathematics, Chinese Academy of Sciences, Wuhan 430071 (China) 2010-01-15 127 PubMed We propose a method for incorporating nuclear quantum effects in transition path sampling studies of systems that consist of a few degrees of freedom that must be treated quantum mechanically, while the rest are classical-like. We used the normal mode centroid method to describe the quantum subsystem, which is a method that is not CPU intensive but still reasonably accurate. We applied this mixed centroid/classical transition path sampling method to a model system that has nontrivial quantum behavior, and showed that it can capture the correct quantum dynamical features. PMID:20001028 Antoniou, Dimitri; Schwartz, Steven D 2009-12-14 128 DOEpatents A gas powered fluid gun for propelling a stream or slug of a fluid at high velocity toward a target. Recoil mitigation is provided that reduces or eliminates the associated recoil forces, with minimal or no backwash. By launching a quantity of water in the opposite direction, net momentum forces are reduced or eliminated. Examples of recoil mitigation devices include a cone for making a conical fluid sheet, a device forming multiple impinging streams of fluid, a cavitating venturi, one or more spinning vanes, or an annular tangential entry/exit. Grubelich, Mark C; Yonas, Gerold 2013-11-12 129 Single molecule DNA experiments often generate data from force versus extension measurements involving the tethering of a microsphere to one end of a single DNA molecule while the other is attached to a substrate. 
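Recoil measurements of this kind are commonly analyzed with overdamped Langevin simulations of the bead position; a minimal sketch follows (illustrative only: it assumes a simple Hookean tether with constant friction, in reduced units, and omits the position-dependent wall corrections and worm-like-chain elasticity of the actual analysis):

```python
import math
import random

def simulate_recoil(k=1.0, gamma=1.0, kbt=0.0, x0=1.0, dt=1e-3, steps=2000, seed=1):
    """Overdamped Langevin recoil of a tethered bead (Euler-Maruyama):
    gamma * dx/dt = -k * x + thermal noise. All quantities in reduced units."""
    random.seed(seed)
    x = x0
    traj = [x]
    for _ in range(steps):
        noise = math.sqrt(2.0 * kbt * dt / gamma) * random.gauss(0.0, 1.0)
        x += (-k * x / gamma) * dt + noise
        traj.append(x)
    return traj

# With kbt = 0 the recoil is a pure exponential relaxation, x(t) = x0 * exp(-k t / gamma),
# against which the noisy trajectories (kbt > 0) can be compared.
trajectory = simulate_recoil()
print(f"final extension after t = 2.0: {trajectory[-1]:.4f}")
```

In the deterministic limit the final point approaches exp(-2) ≈ 0.135, which provides a quick sanity check of the integrator before thermal noise is switched on.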
We show that the persistence length of single DNA molecules can also be measured based on the recoil dynamics of these DNA-microsphere complexes if appropriate corrections are made to the friction coefficient of the microsphere in the vicinity of the substrate. Comparisons between computer-simulated recoil curves, generated from the corresponding Langevin equation, and experimental recoils are used to assure the validity of data analysis. Neto, José Coelho; Dickman, Ronald; Mesquita, O. N. 2005-01-01 130 The quantum Hall effect arises from the interplay between localized and extended states that form when electrons, confined to two dimensions, are subject to a perpendicular magnetic field. The effect involves exact quantization of all the electronic transport properties owing to particle localization. In the conventional theory of the quantum Hall effect, strong-field localization is associated with a single-particle drift S. Ilani; J. Martin; E. Teitelbaum; J. H. Smet; D. Mahalu; V. Umansky; A. Yacoby 2004-01-01 131 SciTech Connect We study the effect of magnetic field and geometric confinement on excitons confined to a quantum ring. We use analytical matrix elements of the Coulomb interaction and diagonalize numerically the effective-mass Hamiltonian of the problem. To explore the role of different boundary conditions, we investigate the quantum ring structure with a parabolic confinement potential, which allows the wave functions to be expressed in terms of center of mass and relative degrees of freedom of the exciton. On the other hand, wave functions expressed in terms of Bessel functions for electron and hole are used for a hard-wall confinement potential. The binding energy and electron–hole separation of the exciton are calculated as a function of the width of the ring and the strength of an external magnetic field. The linear optical susceptibility as a function of magnetic fields is also discussed.
We explore the Coulomb electron–hole correlation and magnetic confinement for several ring width and size combinations. The Aharonov–Bohm oscillations of exciton characteristics predicted for one-dimensional rings are found not to be present in these finite-width systems. Song, Jakyoung; Ulloa, Sergio E. 2001-03-15 132 SciTech Connect The guiding effect of InGaAs quantum wells in GaAs- and InP-based semiconductor lasers has been studied theoretically and experimentally. The results demonstrate that such waveguides can be effectively used in laser structures with a large refractive index difference between the quantum well material and semiconductor matrix and a large number of quantum wells (e.g. in InP-based structures). (semiconductor lasers. physics and technology) Aleshkin, V Ya; Dikareva, Natalia V; Dubinov, A A; Zvonkov, B N; Karzanova, Maria V; Kudryavtsev, K E; Nekorkin, S M; Yablonskii, A N 2013-05-31 133 We develop an ensemble density functional theory for the fractional quantum Hall effect using a local density approximation. Model calculations for edge reconstructions of a spin-polarized quantum dot give results in good agreement with semiclassical and Hartree-Fock calculations, and with small system numerical diagonalizations. This establishes the usefulness of density functional theory to study the fractional quantum Hall effect, which opens up the possibility of studying inhomogeneous systems with many more electrons than has heretofore been possible. Heinonen, O.; Lubin, M. I.; Johnson, M. D. 1995-11-01 134 The role of quantum mechanics in biological organisms has been a fundamental question of twentieth-century biology. It is only now, however, with modern experimental techniques, that it is possible to observe quantum mechanical effects in bio-molecular complexes directly. Indeed, recent experiments have provided evidence that quantum effects such as wave-like motion of excitonic energy flow, delocalization and entanglement can be G. R.
Fleming; S. F. Huelga; M. B. Plenio 2011-01-01 135 We investigate the quantum photovoltaic effect in double quantum dots by applying the nonequilibrium quantum master equation. A drastic suppression of the photovoltaic current is observed near the open circuit voltage, which leads to a large filling factor. We find that there always exists an optimal inter-dot tunneling that significantly enhances the photovoltaic current. Maximal output power will also be obtained around the optimal inter-dot tunneling. Moreover, the open circuit voltage behaves approximately as the product of the eigen-level gap and the Carnot efficiency. These results suggest a great potential for double quantum dots as efficient photovoltaic devices. Wang, Chen; Ren, Jie; Cao, Jianshu 2014-04-01 136 We analyze a recently proposed method to create fractional quantum Hall (FQH) states of atoms confined in optical lattices [A. Sørensen , Phys. Rev. Lett. 94, 086803 (2005)]. Extending the previous work, we investigate conditions under which the FQH effect can be achieved for bosons on a lattice with an effective magnetic field and finite on-site interaction. Furthermore, we characterize the ground state in such systems by calculating Chern numbers which can provide direct signatures of topological order and explore regimes where the characterization in terms of wave-function overlap fails. We also discuss various issues which are relevant for the practical realization of such FQH states with ultracold atoms in an optical lattice, including the presence of a long-range dipole interaction which can improve the energy gap and stabilize the ground state. We also investigate a detection technique based on Bragg spectroscopy to probe these systems in an experimental realization. Hafezi, M.; Sørensen, A. S.; Demler, E.; Lukin, M. D. 
2007-08-01 137 SciTech Connect Another consequence is that the graviton's on-shell self-energy is negative and infrared divergent at one loop, thereby inducing a negative infrared divergence in the two-loop vacuum energy. We analyze these effects in the context of causal evolution from an initial patch of one Hubble volume which begins inflation at finite times in one of the homogeneous and isotropic Fock states of free QCG. Up to some tedious but probably manageable tensor algebra we show that quantum infrared effects exert an ever-increasing drag on the background's expansion for as long as perturbation theory remains valid. A rough estimate of the relaxation time is easily consistent with enough inflation to solve the smoothness problem. © 1995 Academic Press, Inc. Tsamis, N.C. [Department of Physics, University of Crete, Heraklion, Crete 71409, Greece and Theory Group, FO.R.T.H., Heraklion, Crete 71110 (Greece)]; Woodard, R.P. [Department of Physics, University of Florida, Gainesville, Florida 32611 (United States)] 1995-02-15 138 We report experimental studies on the effect of the depolarizing quantum channel on weak-pulse BB84 and SARG04 quantum cryptography. The experimental results show that, in real-world conditions in which channel depolarization cannot be ignored, BB84 should perform better than SARG04 under the most general eavesdropping attack. Jeong, Y.-C.; Kim, Y.-S.; Kim, Y.-H. 2011-08-01 139 The transverse structure of an optical field can carry a large amount of information. Such a simple concept is the basis for important technologies such as imaging and photolithography. However, some effects in nature will effectively destroy any useful transverse structure the field may possess.
In this thesis, both desirable and undesirable transverse optical effects will be studied. The ultimate limit to the amount of energy that may be usefully transmitted through a medium in a laser beam is imposed by the nonlinear response of the medium. This nonlinearity can be a thermal effect for continuous-wave or long-pulse lasers, while for short-pulse lasers it will tend to be an electronic or molecular effect. Whenever the intensity-nonlinearity product is too large, the transverse structure of the beam will be so greatly distorted as to make the beam essentially useless. This beam degradation is discussed in the thesis for both the continuous-wave thermal case and the short-pulse case, known as laser beam filamentation. The undesirable effect of filamentation is a single-beam four-wave mixing effect. Similar physical processes exist for two-beam four-wave mixing. In the two-beam case, however, there is reason to believe that the generated transverse structure may possess very useful properties for applications in quantum optics. Such effects are explored in this thesis. After discussing physical effects that can alter the transverse structure of a beam, two applications of the use of transverse structure to carry information are also explored. The first of these is coincidence imaging. This is a technique for generating an image of an object with photons that do not directly interact with the object. Experiments were performed to compare the quality of the technique when done using classical versus quantum methods. The second application of transverse effects that is developed is a new method for generating lithographic patterns with super-resolution. The method is shown theoretically for any level of resolution improvement, and is demonstrated experimentally for up to a factor of three improvement over the traditionally accepted limit. Bentley, Sean J.
140 We study the effect of quantum interference on the structure and properties of spontaneous and stimulated transitions in a degenerate V-type three-level atom with an arbitrary total momentum of each state. Explicit expressions for the factors in the terms of the relaxation operator and stimulated transition operator with account of quantum interference effects are obtained. It has been demonstrated that A. A. Panteleev; Vl. K. Roerich 2004-01-01 141 SciTech Connect Effective equations often provide powerful tools to develop a systematic understanding of detailed properties of a quantum system. This is especially helpful in quantum cosmology where several conceptual and technical difficulties associated with the full quantum equations can be avoided in this way. Here, effective equations for Wheeler-DeWitt and loop quantizations of spatially flat, isotropic cosmological models sourced by a massive or interacting scalar are derived and studied. The resulting systems are remarkably different from that given for a free, massless scalar. This has implications for the coherence of evolving states and the realization of a bounce in loop quantum cosmology. Bojowald, Martin; Hernandez, Hector; Skirzewski, Aureliano [Institute for Gravitation and the Cosmos, Pennsylvania State University, 104 Davey Lab, University Park, Pennsylvania 16802 (United States); Universidad Autonoma de Chihuahua, Facultad de Ingenieria, Nuevo Campus Universitario, Chihuahua 31125 (Mexico); Centro de Fisica Fundamental, Universidad de los Andes, Merida 5101 (Venezuela) 2007-09-15 142 PubMed Interacting orbital degrees of freedom in a Mott insulator are essentially directional and frustrated. In this Letter, the effect of dilution in a quantum-orbital system with this kind of interaction is studied by analyzing a minimal orbital model which we call the two-dimensional quantum compass model. 
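For reference, the two-dimensional quantum compass model named here is conventionally written, on a square lattice of pseudospin-1/2 operators, as (standard textbook definition, not quoted from the paper):

```latex
H = -J_x \sum_{i} \tau_i^x \, \tau_{i+\hat{e}_x}^x \;-\; J_z \sum_{i} \tau_i^z \, \tau_{i+\hat{e}_z}^z
```

The directional character referred to in the abstract is explicit in this form: bonds along the x direction couple only the x pseudospin components, and bonds along the z direction only the z components.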
We find that the decrease of the ordering temperature due to dilution is stronger than that in spin models, but it is also much weaker than that of the classical model. The difference between the classical and the quantum-orbital systems arises from the enhancement of the effective dimensionality due to quantum fluctuations. PMID:17678040 Tanaka, Takayoshi; Ishihara, Sumio 2007-06-22 144 Recent experiments prompt rethinking of the basics of elastic and inelastic electron scattering in electron microscopy. Standard approximations of elastic scattering largely based on Bragg's law seem less clearly relevant when individual columns or even single atoms are probed. Phase shift analysis of atomic scattering can provide some checks and insights. The dielectric theory of aloof beam interactions is severely tested by observations of nanoparticle recoil but may be capable of explaining de-coherence effects induced by thermal fluctuations. Howie, Archie 2014-06-01 145 PubMed Continuous and pulsed quantum Zeno effects were observed using a ⁸⁷Rb Bose-Einstein condensate. Oscillations between two ground hyperfine states of a magnetically trapped condensate, externally driven at a transition rate ω_R, were suppressed by destructively measuring the population in one of the states with resonant light.
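The measurement-induced suppression at the heart of this experiment can be reproduced with a short numerical sketch (illustrative only; the Rabi frequency and time values are arbitrary assumptions, not taken from the paper): projecting the state every δt during resonant driving makes the effective loss rate from the initial state scale as ω_R²δt/4, so more frequent measurement means stronger freezing.

```python
import math

def survival_after_pulsed_measurements(omega_r, dt, total_time):
    """Probability of remaining in the initial state of a resonantly driven
    two-level system when a projective measurement is applied every dt."""
    n = int(total_time / dt)
    p_flip = math.sin(omega_r * dt / 2.0) ** 2  # population transferred per interval
    return (1.0 - p_flip) ** n

omega_r = 2.0 * math.pi  # Rabi frequency in inverse time units (assumed value)
total_time = 1.0         # one full Rabi period

for dt in (0.1, 0.05, 0.01):
    p0 = survival_after_pulsed_measurements(omega_r, dt, total_time)
    rate = -math.log(p0) / total_time  # effective decay rate of the initial state
    print(f"dt = {dt:5.2f}: survival = {p0:.3f}, effective rate = {rate:.3f}")
```

The printed effective rate falls linearly with δt, in line with the small-δt estimate ω_R²δt/4; without measurements the system would simply complete a full Rabi flop in this time.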
The suppression of the transition rate in the two-level system was quantified for pulsed measurements with a time interval δt between pulses and continuous measurements with a scattering rate γ. We observe that the continuous measurements exhibit the same suppression in the transition rate as the pulsed measurements when γδt = 3.60(0.43), in agreement with the predicted value of 4. Increasing the measurement rate suppressed the transition rate down to 0.005 ω_R. PMID:17280408 Streed, Erik W; Mun, Jongchul; Boyd, Micah; Campbell, Gretchen K; Medley, Patrick; Ketterle, Wolfgang; Pritchard, David E 2006-12-31 146 PubMed Central We construct a descriptive toy model that considers quantum effects on biological evolution starting from Chaitin's classical framework. There are smart evolution scenarios in which a quantum world is as favorable as classical worlds for evolution to take place. However, in more natural scenarios, the rate of evolution depends on the degree of entanglement present in quantum organisms with respect to classical organisms. If the entanglement is maximal, classical evolution turns out to be more favorable. 2012-01-01 147 Stents are artificial implants that provide scaffolding to a cavity inside the body. This paper presents a new luminal device for reducing the mechanical failure of stents due to recoil, which is one of the most important issues in stenting. This device, which we call a recoil-resilient ring (RRR), is utilized standalone or potentially integrated with existing stents to address the problem of recoil. The proposed structure aims to minimize the need for high-pressure overexpansion that can induce intra-luminal trauma and excess growth of vascular tissue causing later restenosis. The RRR is an overlapped open ring with asymmetrical sawtooth structures that are intermeshed.
These teeth can slide on top of each other, while the ring is radially expanded, but interlock step-by-step so as to keep the final expanded state against compressional forces that normally cause recoil. The RRRs thus deliver balloon expandability and, when integrated with a stent, bring both radial rigidity and longitudinal flexibility to the stent. The design of the RRR is investigated through finite element analysis (FEA), and then the devices are fabricated using micro-electro-discharge machining of 200-µm-thick Nitinol sheet. The standalone RRR is balloon expandable in vitro at 5–7 atm, which is well within the recommended in vivo pressure ranges for stenting procedures. FEA compression tests indicate 13× less reduction of the cross-sectional area of the RRR compared with a typical stainless steel stent. These results also show perfect elastic recovery of the RRR after removal of the pressure compared to the remaining plastic deformations of the stainless steel stent. On the other hand, experimental loading tests show that the fabricated RRRs have 2.8× the radial stiffness of a two-column section of a commercial stent while exhibiting comparable elastic recovery. Furthermore, testing of in vitro expansion in a mock artery tube shows around 2.9% recoil, approximately 5–11× smaller than the recoil reported for commercial stents. These experimental results demonstrate the effectiveness of the device design for the targeted luminal support and stenting applications. Mehdizadeh, Arash; Ali, Mohamed Sultan Mohamed; Takahata, Kenichi; Al-Sarawi, Said; Abbott, Derek 2013-06-01 148 We study the conductance through a triangular triple quantum dot, which is connected to two noninteracting leads, using the numerical renormalization group (NRG). It is found that the system shows a variety of Kondo effects depending on the filling of the triangle.
The SU(4) Kondo effect occurs at half-filling, and a sharp conductance dip due to a phase lapse appears in the gate-voltage dependence. Furthermore, when four electrons occupy the three sites on average, a local S=1 moment, which is caused by the Nagaoka mechanism, is induced along the triangle. The temperature dependence of the entropy and spin susceptibility of the triangle shows that this moment is screened by the conduction electrons via two separate stages at different temperatures. The two-terminal and four-terminal conductances show a clear difference at the gate voltages, where the SU(4) or the S=1 Kondo effects occur [1]. We will also discuss effects of deformations of the triangular configuration, caused by the inhomogeneity in the inter-dot couplings and in the gate voltages. [1] T. Numata, Y. Nisikawa, A. Oguri, and A. C. Hewson: arXiv:0808.3496. Oguri, Akira; Numata, Takahide; Nisikawa, Yunori; Hewson, A. C. 2009-03-01 149 Recoil-ion and electron momentum spectroscopy is a rapidly developing technique that allows one to measure the vector momenta of several ions and electrons resulting from atomic or molecular fragmentation. In a unique combination, large solid angles close to 4π and superior momentum resolutions around a few per cent of an atomic unit (a.u.) are typically reached in state-of-the-art machines, so-called reaction-microscopes. Evolving from recoil-ion and cold target recoil-ion momentum spectroscopy (COLTRIMS), reaction-microscopes—the 'bubble chambers of atomic physics'—mark the decisive step forward to investigate many-particle quantum-dynamics occurring when atomic and molecular systems or even surfaces and solids are exposed to time-dependent external electromagnetic fields. This paper concentrates on just these latest technical developments and on at least four new classes of fragmentation experiments that have emerged within about the last five years.
First, multi-dimensional images in momentum space brought unprecedented information on the dynamics of single-photon induced fragmentation of fixed-in-space molecules and on their structure. Second, a break-through in the investigation of high-intensity short-pulse laser induced fragmentation of atoms and molecules has been achieved by using reaction-microscopes. Third, for electron and ion-impact, the investigation of two-electron reactions has matured to a state such that the first fully differential cross sections (FDCSs) are reported. Fourth, comprehensive sets of FDCSs for single ionization of atoms by ion-impact, the most basic atomic fragmentation reaction, brought new insight, a couple of surprises and unexpected challenges to theory at keV to GeV collision energies. In addition, a brief summary on the kinematics is provided at the beginning. Finally, the rich future potential of the method is briefly envisaged. Ullrich, J.; Moshammer, R.; Dorn, A.; Dörner, R.; Schmidt, L. Ph H.; Schmidt-Böcking, H. 2003-09-01 150 NASA Technical Reports Server (NTRS) Recoil "kicks" induced by gravitational radiation are expected in the inspiral and merger of black holes. Recently the numerical relativity community has begun to measure the significant kicks found when both unequal masses and spins are considered. Because understanding the cause and magnitude of each component of this kick may be complicated in inspiral simulations, we consider these effects in the context of a simple test problem. We study recoils from collisions of binaries with initially head-on trajectories, starting with the simplest case of equal masses with no spin; adding spin and varying the mass ratio, both separately and jointly. We find spin-induced recoils to be significant even in head-on configurations. 
Additionally, it appears that the scaling of transverse kicks with spins is consistent with post-Newtonian (PN) theory, even though the kick is generated in the nonlinear merger interaction, where PN theory should not apply. This suggests that a simple heuristic description might be effective in the estimation of spin-kicks. Choi, Dae-Il; Kelly, Bernard J.; Boggs, William D.; Baker, John G.; Centrella, Joan; Van Meter, James 2007-01-01 151 SciTech Connect The author discusses future directions in the development of classical hydrodynamics for extended nucleons, corresponding to nucleons of finite size interacting with massive meson fields. This new theory provides a natural covariant microscopic approach to relativistic nucleus-nucleus collisions that automatically includes spacetime nonlocality and retardation, nonequilibrium phenomena, interactions among all nucleons, and particle production. The present version of the theory includes only the neutral scalar (σ) and neutral vector (ω) meson fields. In the future, additional isovector pseudoscalar (π⁺, π⁻, π⁰), isovector vector (ρ⁺, ρ⁻, ρ⁰), and neutral pseudoscalar (η) meson fields should be incorporated. Quantum size effects should be included in the equations of motion by use of the spreading function of Moniz and Sharp, which generates an effective nucleon mass density smeared out over a Compton wavelength. However, unlike the situation in electrodynamics, the Compton wavelength of the nucleon is small compared to its radius, so that effects due to the intrinsic size of the nucleon dominate. Nix, J.R. 1994-03-01 152 SciTech Connect The article is dedicated to the review and analysis of the effects and processes occurring in Si-Ge quantum size semiconductor structures upon particle irradiation including ion implantation. Comparisons to bulk materials are drawn.
The reasons for the enhanced radiation hardness of superlattices and quantum dots are elucidated. Some technological applications of the radiation treatment are reviewed. Sobolev, N. A., E-mail: sobolev@ua.pt [Universidade de Aveiro, Departamento de Fisica and I3N (Portugal)] 2013-02-15 154 The effect of quantum mechanics violation due to quantum gravity on neutrino oscillation is investigated. It is found that the mechanism introduced by Ellis, Hagelin, Nanopoulos, and Srednicki through the modification of the Liouville equation can affect neutrino oscillation behavior and may be taken as a new solution of the solar neutrino problem. Liu, Yong; Hu, Liangzhong; Ge, Mo-Lin 1997-11-01 155 SciTech Connect The oscillatory screening effects on the transition bremsstrahlung radiation due to the polarization interaction between the electron and shielding cloud are investigated in dense quantum plasmas. The impact-parameter analysis with the modified Debye-Hueckel potential is applied to obtain the bremsstrahlung radiation cross section as a function of the quantum wave number, impact parameter, photon energy, and projectile energy. The results show that the oscillatory quantum screening effect strongly suppresses the transition bremsstrahlung radiation spectrum in dense quantum plasmas. It is also found that the oscillatory quantum screening effect is more significant near the maximum peak of the bremsstrahlung radiation cross section.
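In its simplest, unmodified Debye–Hückel form, the screened interaction underlying this kind of analysis has the familiar Yukawa shape; a generic sketch in arbitrary units follows (the paper's modified potential, with its quantum wave-number corrections, is not reproduced here):

```python
import math

def screened_coulomb(r, debye_length, q1q2=1.0):
    """Debye-Hueckel (Yukawa) screened Coulomb potential, arbitrary units:
    V(r) = (q1*q2 / r) * exp(-r / lambda_D)."""
    return q1q2 / r * math.exp(-r / debye_length)

# Screening suppresses the interaction relative to the bare Coulomb tail,
# and increasingly so at larger separations.
for r in (0.5, 1.0, 2.0):
    bare = 1.0 / r
    screened = screened_coulomb(r, debye_length=1.0)
    print(f"r = {r}: bare = {bare:.3f}, screened = {screened:.3f}")
```

The exponential cutoff at the Debye length is what truncates the long-range bremsstrahlung contributions discussed in the abstract.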
In addition, the maximum peak of the bremsstrahlung radiation cross section gets closer to the center of the shielding cloud with increasing quantum wave number. It is interesting to note that the range of the bremsstrahlung photon energy would be broadened with an increase of the oscillatory screening effect. It is also found that the oscillatory screening effects on the transition bremsstrahlung spectrum decrease with increasing projectile energy. Jung, Young-Dae [Department of Applied Physics, Hanyang University, Ansan, Kyunggi-Do 426-791, South Korea and Department of Electrical and Computer Engineering, MC 0407, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093-0407 (United States)] 2011-06-15 156 SciTech Connect The quantum spin Hall (QSH) state is a topologically non-trivial state of quantum matter which preserves time-reversal symmetry; it has an energy gap in the bulk, but topologically robust gapless states at the edge. Recently, this novel effect has been predicted and observed in HgTe quantum wells. In this work we predict a similar effect arising in Type-II semiconductor quantum wells made from InAs/GaSb/AlSb. Because of a rare band alignment the quantum well band structure exhibits an 'inverted' phase similar to CdTe/HgTe quantum wells, which is a QSH state when the Fermi level lies inside the gap. Due to the asymmetric structure of this quantum well, the effects of inversion symmetry breaking and inter-layer charge transfer are essential. By standard self-consistent calculations, we show that the QSH state persists when these corrections are included, and a quantum phase transition between the normal insulator and the QSH phase can be electrically tuned by the gate voltage. Liu, Chaoxing; /Tsinghua U., Beijing /Stanford U., Phys. Dept.; Hughes, Taylor L.; Qi, Xiao-Liang; /Stanford U., Phys. Dept.; Wang, Kang; /UCLA; Zhang, Shou-Cheng; /Stanford U., Phys. Dept.
2010-03-19 157 PubMed In this study, the recoil and oscillation of single and consecutively printed drops on substrates of different wettabilities are examined using a high-speed camera. The results show that, for a droplet impact on a dry surface at Weber number ~O(1), both inertia and capillary effects are important in the initial spreading regime before the droplet starts to oscillate. For a substrate of higher wettability, drop oscillation decays faster due to a stronger viscous dissipation over a longer oscillation path parallel to the substrate. It is also found that when a drop impacts on a sessile drop sitting on a hydrophobic substrate, the combined drop recoiled twice as a result of the coalescence of the two drops, whereas no recoil is observed for the impact of a single drop on a dry surface under the same condition. Furthermore, a single-degree-of-freedom vibration model for the height oscillation of single and combined drops on a hydrophobic substrate is established. For the condition considered, the model predictions match well with the experiments. The results also show the extent to which the increase in the liquid viscosity facilitates oscillation damping and the quantitative extension of the oscillation time of a combined drop compared to a single drop. PMID:23360081 Yang, Xin; Chhasatia, Viral H; Sun, Ying 2013-02-19 158 PubMed Quantum confinement can dramatically slow down electron-phonon relaxation in nanoclusters. Known as the phonon bottleneck, the effect remains elusive. Using a state-of-the-art time-domain ab initio approach, we model the observed bottleneck in CdSe quantum dots and show that it occurs under quantum Zeno conditions. Decoherence in the electronic subsystem, induced by elastic electron-phonon scattering, should be significantly faster than inelastic scattering. Achieved with multiphonon relaxation, the phonon bottleneck is broken by Auger processes and structural defects, rationalizing experimental difficulties.
PMID:23683182 Kilina, Svetlana V; Neukirch, Amanda J; Habenicht, Bradley F; Kilin, Dmitri S; Prezhdo, Oleg V 2013-05-01 159 We study the evolution of the two-terminal conductance plateaus with a magnetic field for armchair graphene nanoribbons (GNRs) and graphene nanoconstrictions (GNCs). For GNRs, the conductance plateaus of 2e²/h at zero magnetic field evolve smoothly to the quantum Hall regime, where the plateaus in conductance at even multiples of 2e²/h disappear. It is shown that the relation between the energy and magnetic field does not follow the same behavior as in “bulk” graphene, reflecting the different electronic structure of a GNR. For the nanoconstrictions we show that the conductance plateaus do not have the same sharp behavior in zero magnetic field as in a GNR, which reflects the presence of backscattering in such structures. Our results show good agreement with recent experiments on high-quality graphene nanoconstrictions. The behavior with the magnetic field for a GNC shows some resemblance to the one for a GNR but now depends also on the length of the constriction. By analyzing the evolution of the conductance plateaus in the presence of the magnetic field we can obtain the width of the structures studied and show that this is a powerful experimental technique in the study of the electronic and structural properties of narrow structures. Guimarães, M. H. D.; Shevtsov, O.; Waintal, X.; van Wees, B. J. 2012-02-01 160 In the phenomenon of diffraction of an electromagnetic wave in a lattice, the phase relations between the transmitted wave and the diffracted wave are fundamental. A vector can be written that describes the quantum state of the radiation in the above-mentioned case, and the photoelectric absorption cross section can be computed. It will be proved that the way the dynamical theory of x-ray diffraction deals with absorption is not correct because it does not consider the interference between the ? and ?
rays; this phenomenon causes the failure of the dynamical theory in the explanation of the Borrmann effect, which will be explained by means of a theory proposed herein. In addition, a new phenomenon is described that is not expected by the dynamical theory, the coherent scattering of photons from the ? ray to the ? ray; such a phenomenon is the real origin of the ?-ray extinction in the crystal and a quite anomalous expression for the extinction depth is found that is inversely proportional to the incident intensity. Biagini, M. 1990-10-01 161 Interaction-driven integer quantum Hall effects are anticipated [1] in graphene bilayers because of the near-degeneracy of eight Landau levels which appear near the neutral-system Fermi level at filling factors between ν = -4 and ν = 4. The bilayer graphene octet exhibits a wide variety of broken-symmetry states, with Ising, XY and Heisenberg character, which can be controlled by an external field which creates an electric potential difference between the two layers. Because of the peculiarities of the bilayer graphene electronic structure, states with n=0 and n=1 orbital character are degenerate. I will explain predictions that an intra-Landau-level cyclotron resonance signal will appear at some odd-integer filling factors, accompanied by collective modes which are nearly gapless and have approximate k^3/2 dispersion. This talk will be based on work performed in collaboration with Yafis Barlas, Rene Cote, Kentaro Nomura, and Jules Lambert. [1] Y. Barlas et al., Phys. Rev. Lett. 101, 097601 (2008). MacDonald, Allan H. 2009-03-01 162 The interaction of an electronic spin with its nuclear environment, an issue known as the central spin problem, has been the subject of considerable attention due to its relevance for spin-based quantum computation using semiconductor quantum dots.
Independent control of the nuclear spin bath using nuclear magnetic resonance techniques and dynamic nuclear polarization using the central spin itself offer unique possibilities for manipulating the nuclear bath with significant consequences for the coherence and controlled manipulation of the central spin. Here we review some of the recent optical and transport experiments that have explored this central spin problem using semiconductor quantum dots. We focus on the interaction between 10^4-10^6 nuclear spins and a spin of a single electron or valence-band hole. We also review the experimental techniques as well as the key theoretical ideas and the implications for quantum information science. Chekhovich, E. A.; Makhonin, M. N.; Tartakovskii, A. I.; Yacoby, A.; Bluhm, H.; Nowack, K. C.; Vandersypen, L. M. K. 2013-06-01 163 We theoretically investigate the optical properties of the exciton confined in a parabolic quantum dot, with and without an electric field, by means of a perturbative-variational method. The quantum-dot size enhances the 1s eigenvalue and oscillator strength. In a smaller dot, the relative extension of the exciton wave function is equal to the size of the dot. The 1s exciton binding energy S. JAZIRI; G. BASTARD; R. BENNACEUR 1993-01-01 164 The calculation results show that the bonding energy and electronic states of silicon quantum dots (Si QDs) are different on various curved surfaces (CS); for example, a Si-O-Si bridge bond on a curved surface provides localized levels in the band gap, and its bonding energy is shallower than that on a facet. A curved surface breaks the symmetrical shape of silicon quantum dots, on which some bonds can produce localized electronic states in the band gap. The red-shifting of photoluminescence spectra on smaller silicon quantum dots can be explained by the CS effect. In the CS effect, surface curvature is determined by the shape of Si QDs or silicon nanostructures, which is independent of their sizes.
The CS effect exhibits fundamental physical properties in nanophysics as interesting as those of the quantum confinement effect. Huang, Wei-Qi; Huang, Zhong-Mei; Cheng, Han-Qiong; Miao, Xin-Jian; Shu, Qin; Liu, Shi-Rong; Qin, Chao-Jian 2012-10-01 165 PubMed Central It is well known that the topological phenomena with fractional excitations, the fractional quantum Hall effect, will emerge when electrons move in Landau levels. Here we show the theoretical discovery of the fractional quantum Hall effect in the absence of Landau levels in an interacting fermion model. The non-interacting part of our Hamiltonian is the recently proposed topologically non-trivial flat-band model on a checkerboard lattice. In the presence of nearest-neighbour repulsion, we find that at 1/3 filling, the Fermi-liquid state is unstable towards the fractional quantum Hall effect. At 1/5 filling, however, a next-nearest-neighbour repulsion is needed for the occurrence of the 1/5 fractional quantum Hall effect when the nearest-neighbour repulsion is not too strong. We demonstrate the characteristic features of these novel states and determine the corresponding phase diagram. Sheng, D.N.; Gu, Zheng-Cheng; Sun, Kai; Sheng, L. 2011-01-01 166 National Technical Information Service (NTIS) A multiparticle theory of the Integral Quantum Hall Effect (IQHE) was constructed operating with a pair wave function as an order parameter. The IQHE is described with bosonic macroscopic states while the fractional QHE with fermionic ones. The calculation... 1985-01-01 167 SciTech Connect We review both classical and quantum potential scattering in two dimensions in a magnetic field, with applications to the quantum Hall effect. Classical scattering is complex, due to the approach of scattering states to an infinite number of dynamically bound states. Quantum scattering follows the classical behavior rather closely, exhibiting sharp resonances in place of the classical bound states.
Extended scatterers provide a quantitative explanation for the breakdown of the QHE at a comparatively small Hall voltage as seen by Kawaji et al., and possibly for noise effects. Trugman, S.A. 1994-12-16 168 SciTech Connect The influence of electron exchange and quantum screening on the Thomson scattering process is investigated in degenerate quantum Fermi plasmas. The Thomson scattering cross section in quantum plasmas is obtained by the plasma dielectric function and fluctuation-dissipation theorem as a function of the electron-exchange parameter, Fermi energy, plasmon energy, and wave number. It is shown that the electron-exchange effect enhances the Thomson scattering cross section in quantum plasmas. It is also shown that the differential Thomson scattering cross section has a minimum at the scattering angle θ = π/2. It is also found that the Thomson scattering cross section increases with an increase of the Fermi energy. In addition, the Thomson scattering cross section is found to decrease with increasing plasmon energy. Lee, Gyeong Won [Department of Applied Physics, Hanyang University, Ansan, Kyunggi-Do 426-791 (Korea, Republic of)]; Jung, Young-Dae [Department of Applied Physics, Hanyang University, Ansan, Kyunggi-Do 426-791 (Korea, Republic of); Department of Physics, Applied Physics, and Astronomy, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, New York 12180-3590 (United States)] 2013-06-15 169 We analyze the detection of itinerant photons using a quantum nondemolition measurement. An important example is the dispersive detection of microwave photons in circuit quantum electrodynamics, which can be realized via the nonlinear interaction between photons inside a superconducting transmission line resonator.
We show that the back action due to the continuous measurement imposes a limit on the detector efficiency in such a scheme. We illustrate this using a setup where signal photons have to enter a cavity in order to be detected dispersively. In this approach, the measurement signal is the phase shift imparted to an intense beam passing through a second cavity mode. The restrictions on the fidelity are a consequence of the quantum Zeno effect, and we discuss both analytical results and quantum trajectory simulations of the measurement process. Helmer, Ferdinand; Mariantoni, Matteo; Solano, Enrique; Marquardt, Florian 2009-05-01 170 PubMed We measured the angular dependence of the three recoil-proton polarization components in two-body photodisintegration of the deuteron at a photon energy of 2 GeV. These new data provide a benchmark for calculations based on quantum chromodynamics. Two of the five existing models have made predictions of polarization observables. Both explain the longitudinal polarization transfer satisfactorily. Transverse polarizations are not well described, but suggest isovector dominance. PMID:17501566 Jiang, X; Arrington, J; Benmokhtar, F; Camsonne, A; Chen, J P; Choi, S; Chudakov, E; Cusanno, F; Deur, A; Dutta, D; Garibaldi, F; Gaskell, D; Gayou, O; Gilman, R; Glashauser, C; Hamilton, D; Hansen, O; Higinbotham, D W; Holt, R J; de Jager, C W; Jones, M K; Kaufman, L J; Kinney, E R; Kramer, K; Lagamba, L; de Leo, R; Lerose, J; Lhuillier, D; Lindgren, R; Liyanage, N; McCormick, K; Meziani, Z-E; Michaels, R; Moffit, B; Monaghan, P; Nanda, S; Paschke, K D; Perdrisat, C F; Punjabi, V; Qattan, I A; Ransome, R D; Reimer, P E; Reitz, B; Saha, A; Schulte, E C; Sheyor, R; Slifer, K; Solvignon, P; Sulkosky, V; Urciuoli, G M; Voutier, E; Wang, K; Wijesooriya, K; Wojtsekhowski, B; Zhu, L 2007-05-01 171 SciTech Connect Spin Hall effect can be induced both by the extrinsic impurity scattering and by the intrinsic spin-orbit coupling in the electronic structure. 
The HgTe/CdTe quantum well has a quantum phase transition where the electronic structure changes from normal to inverted. We show that the intrinsic spin Hall effect of the conduction band vanishes on the normal side, while it is finite on the inverted side. This difference gives a direct mechanism to experimentally distinguish the intrinsic spin Hall effect from the extrinsic one. Yang, Wen; Chang, Kai; /Beijing, Inst. Semiconductors; Zhang, Shou-Cheng; /Stanford U., Phys. Dept. 2010-03-19 172 It has been previously demonstrated, employing charge detection techniques, that quantum cellular automata (QCA) processes exist in the vicinity of quadruple degeneracy points in both ring and serial arrangements of lateral triple quantum dots. The effect is primarily an electrostatic one. In this paper, we report on transport measurements through a triple dot potential and study experimentally the interplay between these QCA phenomena and the Pauli (spin) blockade effect. We demonstrate experimentally that the interaction between these processes leads to a higher order and indirect form of spin blockade in which the QCA effect itself is blockaded. Gaudreau, L.; Sachrajda, A. S.; Studenikin, S. A.; Zawadzki, P.; Kam, A. 2008-03-01 173 DOEpatents A neutron rem meter utilizing proton recoil and thermal neutron scintillators to provide neutron detection and dose measurement. In using both fast scintillators and a thermal neutron scintillator the meter provides a wide range of sensitivity, uniform directional response, and uniform dose response. The scintillators output light to a photomultiplier tube that produces an electrical signal to an external neutron counter. Olsher, Richard H. (Los Alamos, NM); Seagraves, David T. 
(Los Alamos, NM) 2003-01-01 174 A new classical ion trajectory simulation program based on the binary collision approximation has been developed in order to support the results of time-of-flight scattering and recoiling spectrometry (TOF-SARS) and scattering and recoiling imaging spectrometry (SARIS). The code was designed to provide information directly related to the TOF-SARS and SARIS measurements and to operate efficiently on small personal computers. The calculation uses the Ziegler-Biersack-Littmark (ZBL) universal screening function or the Molière screening function to simulate the three-dimensional motion of atomic particles and includes simultaneous collisions involving several atoms. For TOF-SARS, the program calculates the energy and time-of-flight distributions of scattered and recoiled particles, polar (incident) angle ?-scans, and azimuthal angle ?-scans. For SARIS, the program provides images of the scattering and recoiling intensities in polar exit angle and azimuthal angle (?, ?)-space. A two-dimensional reliability factor (R) has been developed in order to obtain a quantitative comparison of experimental and simulated images. Examples of simulations are presented for Ni{100}, {110} and {111} surfaces and a Pt{111} surface. The R-factor is used to quantitatively compare the simulated Pt{111} image to an experimentally emulated image. Bykov, V.; Kim, C.; Sung, M. M.; Boyd, K. J.; Todorov, S. S.; Rabalais, J. W. 175 Nuclear recoils produced by neutrons, alphas and neutrinos as they scatter from target nuclei are important sources of background which must be considered in WIMP searches. PMTs and other detector components may contribute neutrons which generate a source of background. Alphas on the surface of the vessel can also be a serious issue for some of the experiments.
And, neutrino-induced Dongming Mei; Andrew Hime; Christina Keller; Zhongbao Yin 2007-01-01 176 The review considers the peculiarities of symmetry breaking and symmetry transformations and the related physical effects in finite quantum systems. Some types of symmetry in finite systems can be broken only asymptotically. However, with a sufficiently large number of particles, crossover transitions become sharp, so that symmetry breaking happens similarly to that in macroscopic systems. This concerns, in particular, global gauge symmetry breaking, related to Bose-Einstein condensation and superconductivity, or isotropy breaking, related to the generation of quantum vortices, and the stratification in multicomponent mixtures. A special type of symmetry transformation, characteristic only for finite systems, is the change of shape symmetry. These phenomena are illustrated by the examples of several typical mesoscopic systems, such as trapped atoms, quantum dots, atomic nuclei, and metallic grains. The specific features of the review are: (i) the emphasis on the peculiarities of the symmetry breaking in finite mesoscopic systems; (ii) the analysis of common properties of physically different finite quantum systems; (iii) the manifestations of symmetry breaking in the spectra of collective excitations in finite quantum systems. The analysis of these features allows for the better understanding of the intimate relation between the type of symmetry and other physical properties of quantum systems. This also makes it possible to predict new effects by employing the analogies between finite quantum systems of different physical nature. Birman, J. L.; Nazmitdinov, R. G.; Yukalov, V. I. 2013-05-01 177 We propose nano-optical antennas with asymmetric radiation patterns as light-driven mechanical recoil force generators. Directional antennas are found to generate recoil force efficiently when driven in the spectral proximity of their resonances. 
It is also shown that the recoil force is equivalent to the Poynting vector integrated over a closed sphere containing the antenna structures. Song, Jung-Hwan; Shin, Jonghwa; Lim, Hee-Jin; Lee, Yong-Hee 2011-08-01 178 SciTech Connect We investigate the anti-Zeno phenomenon as well as the quantum Zeno effect for the irreversible quantum tunneling from a quantum dot to a ring array of quantum dots. By modeling the total system with the Anderson-Fano-Lee model, it is found that the transition from the quantum Zeno to the quantum anti-Zeno effect can happen by adjusting magnetic flux and gate voltage. Zhou Lan [Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing, 100080 (China); Department of Physics, Hunan Normal University, Changsha 410081 (China); Hu, F. M. [Department of Mathematics, Capital Normal University, Beijing, 100037 (China); Lu Jing [Department of Physics, Hunan Normal University, Changsha 410081 (China); Sun, C. P. [Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing, 100080 (China) 2006-09-15 179 SciTech Connect Within the effective average action approach to quantum gravity, we recover the low-energy effective action as derived in the effective field theory framework by studying the flow of possibly nonlocal form factors that appear in the curvature expansion of the effective average action. We restrict to the one-loop flow where progress can be made with the aid of the nonlocal heat kernel expansion. We discuss the possible physical implications of the scale-dependent low-energy effective action through the analysis of the quantum corrections to the Newtonian potential. Satz, A.; Mazzitelli, F. D. [Departamento de Fisica, Facultad de Ciencias Exactas y Naturales, Univ. de Buenos Aires and Instituto de Fisica de Buenos Aires, CONICET Ciudad Universitaria, Pabellon 1, 1428 Buenos Aires (Argentina); Codello, A. 
[Institut fuer Physik, Johannes Gutenberg-Universitaet, Mainz Staudingerweg 7, D-55099 Mainz (Germany) 2010-10-15 180 PubMed The laws of thermodynamics apply equally well to quantum systems as to classical systems, and because of this, quantum effects do not change the fundamental thermodynamic efficiency of isothermal refrigerators or engines. We show that, despite this fact, quantum mechanics permits measurement-based feedback control protocols that are more thermodynamically efficient than their classical counterparts. As part of our analysis, we perform a detailed accounting of the thermodynamics of unitary feedback control and elucidate the sources of inefficiency in measurement-based and coherent feedback. PMID:24827219 Horowitz, Jordan M; Jacobs, Kurt 2014-04-01 182 Quantum dot hybrid qubits formed from three electrons in double quantum dots represent a promising compromise between high speed and simple fabrication for solid-state implementations of single-qubit and two-qubit quantum logic gates. We derive the Schrieffer-Wolff effective Hamiltonian that describes in a simple and intuitive way the qubit by combining a Hubbard-like model with a projector operator method.
As a result, the Hubbard-like Hamiltonian is transformed into an equivalent expression in terms of the exchange coupling interactions between pairs of electrons. The effective Hamiltonian is exploited to derive the dynamical behavior of the system and its eigenstates on the Bloch sphere to generate qubit operations for quantum logic gates. A realistic implementation in silicon and the coupling of the qubit with a detector are discussed. Ferraro, E.; De Michielis, M.; Mazzeo, G.; Fanciulli, M.; Prati, E. 2014-05-01 183 We introduce an atomistic approach to the dissipative quantum dynamics of charged or neutral excitations propagating through macromolecular systems. Using the Feynman-Vernon path integral formalism, we analytically trace out from the density matrix the atomic coordinates and the heat bath degrees of freedom. This way we obtain an effective field theory which describes the real-time evolution of the quantum excitation and is fully consistent with the fluctuation-dissipation relation. The main advantage of the field-theoretic approach is that it allows us to avoid using the Keldysh contour formulation. This simplification makes it straightforward to derive Feynman diagrams to analytically compute the effects of the interaction of the propagating quantum excitation with the heat bath and with the molecular atomic vibrations. For illustration purposes, we apply this formalism to investigate the loss of quantum coherence of holes propagating through a poly(3-alkylthiophene) polymer. Schneider, E.; a Beccara, S.; Faccioli, P. 2013-08-01 184 To study dissipative quantum transport in ultra-scaled devices, we first solve the Pauli Master Equation using the Effective Mass Approximation, followed by solving ballistic quantum transport using the full band structure determined from the empirical pseudopotential method.
We study the geometry-induced quantum access resistance, evaluate the influence of non-polar phonon scattering, and calculate impurity scattering in devices such as the n-i-n resistor, the Double-Barrier Resonant Tunneling Diode, and Double-Gate Field Effect Transistors. We calculate the band structure and the complex band structure of Silicon Nanowires, develop open boundary conditions for full band quantum transport using the empirical pseudopotential method, and perform atomistic modeling of Silicon Nanowire structures to study electron transport characteristics. Fu, Bo 185 We study the non-Markovian effect on the dynamics of the quantum discord by exactly solving a model consisting of two independent qubits subject to two zero-temperature non-Markovian reservoirs, respectively. Considering the two qubits initially prepared in Bell-like or extended Werner-like states, we show that there is no occurrence of the sudden death, but only instantaneous disappearance of the quantum discord Bo Wang; Zhen-Yu Xu; Ze-Qian Chen; Mang Feng 2010-01-01 186 Quantum dots continue to be an area of intense scientific activity, because they have a number of advantages as the building blocks for advanced semiconductor devices with three-dimensional band-structure engineering. Considerable effort is being devoted to the investigation of effects due to the exciton-phonon interaction on the optical properties of quantum dots. Our theory of photoluminescence and Raman scattering in J. T. Devreese 2002-01-01 187 Quantum electrodynamics theory of heavy ions and atoms is considered. The current status of calculations of the binding energies, the hyperfine splitting and g factor values in heavy few-electron ions is reviewed. The theoretical predictions are compared with available experimental data. Special attention is focused on tests of quantum electrodynamics in strong electromagnetic fields and on determination of the fundamental constants.
Recent progress in calculations of the parity nonconservation effects with heavy atoms and ions is also reported. Shabaev, V. M.; Andreev, O. V.; Bondarev, A. I.; Glazov, D. A.; Kozhedub, Y. S.; Maiorova, A. V.; Plunien, G.; Tupitsyn, I. I.; Volotka, A. V. 2011-05-01 188 Quantum effects on the initial singularity of the Gowdy T³ × R cosmology are studied. This is done by calculating the expectation values of the curvature invariant operator in suitable quantum states. It is found that eigenstates of 'particle' number do not introduce inhomogeneities into the model whereas linear combinations of these states such as the coherent states do. It is also found that the classical singularity persists. Husain, Viqar 1987-11-01 189 The vacuum expectation values of the energy-momentum tensor of quantized scalar and spinor fields in a de Sitter space of the first kind are calculated. Limiting cases of the obtained exact expressions are considered. It is noted that the de Sitter space is a self-consistent solution of the Einstein equations with allowance for quantum vacuum fluctuations of massless fields. Mamaev, S. G. 1981-01-01 190 We compute the normalization of the form factor entering the decay amplitude by using numerical simulations of QCD on the lattice. From our study with dynamical light quarks, and by employing the maximally twisted Wilson quark action, we obtain in the continuum limit . We also compute the scalar and tensor form factors in the region near zero recoil and find , , for . The latter results are useful for searching the effects of physics beyond the Standard Model in decays. Our results for the similar form factors relevant to the non-strange case indicate that the method employed here can be used to achieve the precision determination of the decay amplitude as well.
Atoui, Mariam; Morénas, Vincent; Bećirević, Damir; Sanfilippo, Francesco 2014-05-01 191 We model the gravitational collapse of heavy massive shells including its main quantum corrections. Among these corrections, quantum improvements coming from Quantum Einstein Gravity are taken into account, which provides us with an effective quantum spacetime. Likewise, we consider dynamical Hawking radiation by modeling its back-reaction once the horizons have been generated. Our results point towards a picture of gravitational collapse in which the collapsing shell reaches a minimum non-zero radius (whose value depends on the shell initial conditions) with its mass only slightly reduced. Then, there is always a rebound after which most (or all) of the mass evaporates in the form of Hawking radiation. Since the mass never concentrates in a single point, no singularity appears. Torres, R.; Fayos, F. 2014-06-01 192 We consider a triple quantum dot system in a triangular geometry with one of the dots connected to metallic leads. Using Wilson's numerical renormalization group method, we investigate quantum entanglement and its relation to the thermodynamic and transport properties in the regime where each of the dots is singly occupied on average, but with non-negligible charge fluctuations. It is shown that even in the regime of significant charge fluctuations the formation of the Kondo singlets induces switching between separable and perfectly entangled states. The quantum phase transition between unentangled and entangled states is analyzed quantitatively and the corresponding phase diagram is explained by an exactly solvable spin model. In the framework of an effective model we also explain smearing of the entanglement transition for cases when the symmetry of the triple quantum dot system is relaxed. Tooski, S.
B.; Bułka, Bogdan R.; Žitko, Rok; Ramšak, Anton 2014-06-01 193 Size and shape effects in the electromagnetic response of quantum dots (QDs), such as the depolarization shift of the exciton resonance and the fine structure of the gain band, are considered on the basis of a unified concept of light confinement. We show that at sufficiently large oscillator strength of the transition, the QD behaves as a microcavity and excitation of cavity eigenmodes S. A Maksimenko; G. Ya Slepyan; V. P Kalosha; N. N Ledentsov; A Hoffmann; D Bimberg 2001-01-01 194 Laughlin's theory of fractional charges is worked out in detail for small charges from 1/3 to 1/101. There is a small deviation between computed values and those obtained from the closed-form expression. The ground state energy crosses that of the charge-density waves. We develop a theory of fractional charges by using the quantum mechanics of angular momentum. We find Keshav N. Shrivastava 2008-01-01 195 The progress of computational chemistry in the treatment of liquid systems is outlined, and the combination of the statistical methods (in particular molecular dynamics) with quantum mechanics as the main foundation of this progress is emphasised. The difficulties of experimental studies of liquid systems without having obtained sophisticated theoretical models describing the structural entities and the dynamical behaviour of these THOMAS S. HOFER; BERNHARD R. RANDOLF; BERND M. RODE 196 The two-time correlation function Cxx(t,t′) of the displacement x(t) − x(t0) of a free quantum Brownian particle with respect to its position at a given time t0 is calculated analytically in the framework of the Caldeira and Leggett ohmic dissipation model. As a result, at any temperature T, Cxx(t,t′) exhibits aging, i.e. it depends explicitly on both times t and t′
and not Noëlle Pottier; Alain Mauger 2000-01-01 197 SciTech Connect The phenomenon of quantum interrogation allows one to optically detect the presence of an absorbing object, without the measuring light interacting with it. In an application of the quantum Zeno effect, the object inhibits the otherwise coherent evolution of the light, such that the probability that an interrogating photon is absorbed can in principle be arbitrarily small. We have implemented this technique, achieving efficiencies of up to 73%, and consequently exceeding the 50% theoretical maximum of the original "interaction-free" measurement proposal. We have also predicted and experimentally verified a previously unsuspected dependence on loss. (c) 1999 The American Physical Society. Kwiat, P. G. [Physics Division, P-23, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)]; White, A. G. [Physics Division, P-23, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)]; Mitchell, J. R. [Physics Division, P-23, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)]; Nairz, O. [Institute for Experimental Physics, University of Innsbruck, Innsbruck 6020 (Austria)]; Weihs, G. [Institute for Experimental Physics, University of Innsbruck, Innsbruck 6020 (Austria)]; Weinfurter, H.
[Institute for Experimental Physics, University of Innsbruck, Innsbruck 6020 (Austria)]; Zeilinger, A. [Institute for Experimental Physics, University of Innsbruck, Innsbruck 6020 (Austria)] 1999-12-06 198 The molecular recoiling force stemming from nonequilibrium chain conformation was found to play a very important role in the dewetting stability of polymer thin films. Correct measurements and inclusion of this molecular force into thermodynamic consideration are crucial for analyzing dewetting phenomena and nanoscale polymer chain physics. This force was measured using a simple method based on contour relaxation at the incipient dewetting holes. The recoiling stress was found to increase dramatically with molecular weight and decreasing film thickness. The corresponding forces were calculated to be in the range from 9.0 to 28.2 mN/m, too large to be neglected when compared to the dispersive forces (~10 mN/m) commonly operative in thin polymer films. Yang, M. H.; Hou, S. Y.; Chang, Y. L.; Yang, A. C.-M. 2006-02-01 199 SciTech Connect In this paper we study the possibility of modifying the dynamics of both quantum correlations, such as entanglement and discord, and classical correlations of an open bipartite system by means of the quantum Zeno effect. We consider two qubits coupled to a common boson reservoir at zero temperature. This model describes, for example, two atoms interacting with a quantized mode of a lossy cavity. We show that when the frequencies of the two atoms are symmetrically detuned from that of the cavity mode, oscillations between the Zeno and anti-Zeno regimes occur.
We also calculate analytically the time evolution of both classical correlations and quantum discord, and we compare the Zeno dynamics of entanglement with the Zeno dynamics of classical correlations and discord. Francica, F.; Plastina, F. [Dipartimento di Fisica, Università della Calabria, I-87036 Arcavacata di Rende (Italy); INFN-Gruppo Collegato di Cosenza, Cosenza (Italy); Maniscalco, S. [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FIN-20014 Turun yliopisto (Finland)] 2010-11-15 200 Systems of solitons are approximately described in terms of a finite number of "effective degrees of freedom" interacting via "effective potentials". These are reconstructed, principally from knowledge of solutions to the classical field equations, by a procedure involving the isometric mapping of a sector of the field theoretical Hilbert space onto the Hilbert space of non-relativistic point particles. The quantum P. Vinciarelli 1976-01-01 201 Theoretical and experimental studies on the ratchet effects in graphene and in quantum wells with a lateral superlattice excited by alternating electric fields of the terahertz frequency range are presented. We discuss the Seebeck ratchet effect and helicity-driven photocurrents and show that the photocurrent generation is based on the combined action of a spatially periodic in-plane potential and spatially modulated light. Golub, L. E.; Nalitov, A. V.; Ivchenko, E. L.; Olbrich, P.; Kamann, J.; Eroms, J.; Weiss, D.; Ganichev, S. D. 2013-12-01 202 Directional-solidification experiments are often perturbed by the nucleation of gas bubbles and other residual-impurity effects. We present a detailed experimental study of these phenomena in the system CBr4-C2Cl6 directionally solidified in thin films. As is usual in this type of experiment, we use zone-refined and outgassed products, but do not fill and seal the samples under vacuum.
We study the Silvère Akamatsu; Gabriel Faivre 1996-01-01 203 The role of quantum mechanics in biological organisms has been a fundamental question of twentieth-century biology. It is only now, however, with modern experimental techniques, that it is possible to observe quantum mechanical effects in bio-molecular complexes directly. Indeed, recent experiments have provided evidence that quantum effects such as wave-like motion of excitonic energy flow, delocalization and entanglement can be seen even in complex and noisy biological environments (Engel et al 2007 Nature 446 782; Collini et al 2010 Nature 463 644; Panitchayangkoon et al 2010 Proc. Natl Acad. Sci. USA 107 12766). Motivated by these observations, theoretical work has highlighted the importance of an interplay between environmental noise and quantum coherence in such systems (Mohseni et al 2008 J. Chem. Phys. 129 174106; Plenio and Huelga 2008 New J. Phys. 10 113019; Olaya-Castro et al 2008 Phys. Rev. B 78 085115; Rebentrost et al 2009 New J. Phys. 11 033003; Caruso et al 2009 J. Chem. Phys. 131 105106; Ishizaki and Fleming 2009 J. Chem. Phys. 130 234111). All of this has led to a surge of interest in the exploration of quantum effects in biological systems in order to understand the possible relevance of non-trivial quantum features and to establish a potential link between quantum coherence and biological function. These studies include not only exciton transfer across light harvesting complexes, but also the avian compass (Ritz et al 2000 Biophys. J. 78 707), and the olfactory system (Turin 1996 Chem. Sens. 21 773; Chin et al 2010 New J. Phys. 12 065002). These examples show that the full understanding of the dynamics at bio-molecular length (10 Å) and timescales (sub picosecond) in noisy biological systems can uncover novel phenomena and concepts and hence present a fertile ground for truly multidisciplinary research. Fleming, G. R.; Huelga, S. F.; Plenio, M. B. 
2011-11-01 204 PubMed Central Geometric phases in quantum mechanics play an extraordinary role in broadening our understanding of the fundamental significance of geometry in nature. One of the best-known examples is the Berry phase [M. V. Berry (1984), Proc. R. Soc. London A 392:45], which naturally emerges in quantum adiabatic evolution. So far the applicability and measurements of the Berry phase were mostly limited to systems of weakly interacting quasi-particles, where interference experiments are feasible. Here we show how one can go beyond this limitation and observe the Berry curvature, and hence the Berry phase, in generic systems as a nonadiabatic response of physical observables to the rate of change of an external parameter. These results can be interpreted as a dynamical quantum Hall effect in a parameter space. The conventional quantum Hall effect is a particular example of the general relation if one views the electric field as a rate of change of the vector potential. We illustrate our findings by analyzing the response of interacting spin chains to a rotating magnetic field. We observe the quantization of this response, which we term the rotational quantum Hall effect. Gritsev, V.; Polkovnikov, A. 2012-01-01 205 PubMed Using a fully quantum mechanical approach we study the optical response of a strongly coupled metallic nanowire dimer for variable separation widths of the junction between the nanowires. The translational invariance of the system allows us to apply time-dependent density functional theory (TDDFT) for nanowires of diameters up to 10 nm, which is the largest size considered so far in quantum modeling of plasmonic dimers. By performing a detailed analysis of the optical extinction, induced charge densities, and near fields, we reveal the major nonlocal quantum effects determining the plasmonic modes and field enhancement in the system.
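The Berry phase discussed in the Gritsev-Polkovnikov record above can be checked numerically for the textbook spin-1/2 case. The following sketch (my own illustration, not the nonadiabatic-response protocol of that paper) carries the field direction once around a cone of opening angle theta and evaluates the discretized, gauge-invariant phase, which should approach π(1 − cos θ), i.e. half the subtended solid angle:

```python
import numpy as np

def berry_phase(theta, n=2000):
    """Discretized Berry phase of the lower eigenstate of H = B(phi).sigma
    as the field direction is carried once around a cone of angle theta."""
    phis = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    states = []
    for phi in phis:
        h = np.array([[np.cos(theta), np.sin(theta) * np.exp(-1j * phi)],
                      [np.sin(theta) * np.exp(1j * phi), -np.cos(theta)]])
        _, vecs = np.linalg.eigh(h)   # eigenvalues in ascending order
        states.append(vecs[:, 0])     # lower-energy eigenstate
    # gauge-invariant product of link overlaps around the closed loop;
    # the arbitrary eigenvector phases returned by eigh cancel in the product
    prod = 1.0 + 0.0j
    for k in range(n):
        prod *= np.vdot(states[k], states[(k + 1) % n])
    return -np.angle(prod)
```

For theta = π/3 the result is π/2, matching half the solid angle 2π(1 − cos θ) of the traversed cone.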
These effects consist mainly of electron tunneling between the nanowires at small junction widths and dynamical screening. The TDDFT results are compared with results from classical electromagnetic calculations based on the local Drude and non-local hydrodynamic descriptions of the nanowire permittivity, as well as with results from a recently developed quantum corrected model. The latter provides a way to include quantum mechanical effects such as electron tunneling in standard classical electromagnetic simulations. We show that the TDDFT results can thus be retrieved semi-quantitatively within a classical framework. We also discuss the shortcomings of classical non-local hydrodynamic approaches. Finally, the implications of the actual position of the screening charge density at the gap interfaces are discussed in connection with plasmon ruler applications at subnanometric distances. PMID:24216954 Teperik, Tatiana V; Nordlander, Peter; Aizpurua, Javier; Borisov, Andrei G 2013-11-01 206 SciTech Connect Properties of quantum phase transitional systems in atomic nuclei are explored within the context of the interacting boson model 1 for both first- and second-order systems. A traditional experimental approach is used to search for the effective finite-size critical point as a function of system size and angular momentum by studying derivatives of observables across the phase transition region. The effects of angular momentum on quantum phase transitions are investigated, and properties of first-order phase transitions within the Casten triangle are examined. Williams, E.; Casperson, R. J.; Werner, V. [A. W. Wright Nuclear Structure Laboratory, Yale University, New Haven, Connecticut 06520 (United States) 2010-11-15 207 We investigate all pure quantum-electrodynamics corrections to the np→1s (n = 2-4) transition energies of pionic hydrogen larger than 1 meV, which requires an accurate evaluation of all relevant contributions up to order α^5.
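The scale of the pionic-hydrogen transition energies in the record above can be sanity-checked at leading (Coulomb) order with the reduced-mass Bohr formula E_n = μc²α²/(2n²); the α^5 QED corrections the paper actually computes are far beyond this toy estimate, which is my own illustration:

```python
M_PI, M_P = 139.570e6, 938.272e6   # pi- and proton rest masses, eV/c^2
ALPHA = 1.0 / 137.036              # fine-structure constant

# reduced mass of the pi-p two-body system
mu = M_PI * M_P / (M_PI + M_P)

def binding_energy(n):
    """Leading-order Coulomb binding energy of the n-th level, in eV."""
    return mu * ALPHA**2 / (2 * n**2)

# 3p -> 1s x-ray energy (one of the np -> 1s lines of the entry)
e_3p_1s = binding_energy(1) - binding_energy(3)
```

The 1s binding comes out near 3.2 keV and the 3p→1s line near 2.88 keV, consistent with the measured pionic-hydrogen K x-ray scale.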
These values are needed to extract an accurate strong-interaction shift from experiment. Many small effects, such as second-order and double vacuum polarization contributions, proton and pion self-energies, finite-size and recoil effects, are included with exact mass dependence. Our final value differs from previous calculations by up to −11 ppm for the 1s state, while a recent experiment aims at a 4 ppm accuracy. Schlesser, S.; Le Bigot, E.-O.; Indelicato, P.; Pachucki, K. 2011-07-01 208 When interacting two-dimensional electrons are placed in a large perpendicular magnetic field, to minimize their energy, they capture an even number of flux quanta and create new particles called composite fermions (CFs). These complex electron-flux-bound states offer an elegant explanation for the fractional quantum Hall effect. Thanks to the flux attachment, the effective field vanishes at half-filled Landau levels (ν = 1/2 and 3/2) and CFs exhibit Fermi-liquid-like properties, similar to their zero-field electron counterparts. Here, we study a two-dimensional electron system in AlAs quantum wells where the electrons occupy two conduction band valleys with anisotropic Fermi contours and strain-tunable occupation. We address a fundamental question: whether the anisotropy of the electron effective mass and Fermi surface is transferred to the CFs formed around filling factors ν = 1/2 and 3/2. Similar to their electron counterparts, CFs also exhibit anisotropic transport, suggesting an anisotropy of the CF effective mass and Fermi surface. We also study quantum Hall ferromagnetism for fractional quantum Hall states formed at ν = 1/3 and 5/3 as a function of valley splitting. Within the framework of the CF theory, electronic fractional filling factors ν = 1/3 and 5/3 are equivalent to the integer filling factor p = 1 of CFs.
Reminiscent of the quantum Hall ferromagnetism observed at ν = 1, we report persistent fractional quantum Hall states at filling factors ν = 1/3 and 5/3 when the two valleys are degenerate. However, the comparison of the energy gaps measured at ν = 1/3 and 5/3 to the available theory developed for single-valley, two-spin systems reveals that the gaps and their rates of rise with strain are much smaller than predicted. [1] "Transference of Transport Anisotropy to Composite Fermions," T. Gokmen, M. Padmanabhan, and M. Shayegan, Nature Physics 6, 621-624 (2010). [2] "Ferromagnetic Fractional Quantum Hall States in a Valley-Degenerate Two-Dimensional Electron System," M. Padmanabhan, T. Gokmen, and M. Shayegan, Phys. Rev. Lett. 104, 016805 (2010). Gokmen, Tayfun 2013-03-01 209 Cold atoms in optical high-Q cavities are an ideal model system for long-range interacting particles. The positions of two arbitrary atoms are, independent of their distance, coupled by the back-scattering of photons within the cavity. This mutual coupling can lead to collective instability and self-organization of a cloud of cold atoms interacting with the cavity fields. This phenomenon (CARL, i.e. Collective Atomic Recoil Lasing) has been discussed theoretically for years, but was observed only recently in our lab. The CARL effect is closely linked to superradiant Rayleigh scattering, which has been intensely studied with Bose-Einstein condensates in free space. By adding a resonator, the coherence time of the system, in which the instability occurs, can be strongly enhanced. This enables us to observe cavity-enhanced superradiance with both Bose-Einstein condensates and thermal clouds and allows us to close the discussion about the role of quantum statistics in superradiant scattering. Slama, Sebastian; Krenz, Gordon; Bux, Simone; Zimmermann, Claus; Courteille, Philippe W.
2008-01-01 210 The fractional quantum Hall effect occurs when an extremely clean 2-dimensional fermion gas is subject to a magnetic field. This simple set of circumstances creates phenomena, such as edge reconstruction and fractional statistics, that remain subjects of experimental study 30 years after the discovery of the fractional quantum Hall effect. This thesis investigates the properties of excitations of the fractional quantum Hall effect. The first set of experiments studies the interaction between fractional quantum Hall quasiparticles and nuclei in a quantum point contact (QPC). Following the application of a DC bias, fractional plateaus in the QPC shift symmetrically about half filling of the lowest Landau level, nu = 1/2, suggesting an interpretation in terms of composite fermions. Mapping the effects from the integer to fractional regimes extends the composite fermion picture to include hyperfine coupling. The second set of experiments studies the tunneling of quasiparticles through an antidot in the integer and fractional quantum Hall effect. In the integer regime, we conclude that oscillations are of the Coulomb type from the scaling of magnetic field period with the number of edges bound to the antidot. Generalizing this picture to the fractional regime, we find (based on magnetic field and gate-voltage periods) at nu = 2/3 a tunneling charge of (2/3)e and a single charged edge. Further unpublished data related to this experiment as well as alternative theoretical explanations are also presented. The third set of experiments investigates the properties of the fractional quantum Hall effect in the lowest Landau level of bilayer graphene using a scanning single-electron transistor. We observe a sequence of states which breaks particle-hole symmetry and instead obeys a nu → nu + 2 symmetry. This asymmetry highlights the importance of the orbital degeneracy for many-body states in bilayer graphene.
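The composite-fermion bookkeeping invoked in the records above (e.g. electronic ν = 1/3 as CF filling p = 1) follows the standard Jain sequence ν = p/(2p ± 1) for two attached flux quanta. A small sketch of that arithmetic (the function name is my own, not taken from any of the papers):

```python
from fractions import Fraction

def cf_to_electron_filling(p, sign=+1):
    """Jain-sequence electron filling nu = p / (2p +/- 1) for composite
    fermions with two attached flux quanta at integer CF filling p."""
    return Fraction(p, 2 * p + sign)

# p = 1 maps to nu = 1/3; particle-hole conjugation and higher Landau
# levels then give the companion fractions 2/3, 5/3, ...
```

At half filling the two attached flux quanta cancel the external field on average, which is why CFs at ν = 1/2 behave like a zero-field Fermi liquid, as the entries above describe.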
The fourth set of experiments investigates the coupling between microwaves and the fractional quantum Hall effect. Reflectometry is used to investigate bulk properties of samples with different electron densities. We observe large changes in the amplitude of the reflected signal at each integer filling factor as well as changes in the capacitance of the system. Kou, Angela 211 PubMed Electronic properties such as current flow are generally independent of the electron's spin angular momentum, an internal degree of freedom possessed by quantum particles. The spin Hall effect, first proposed 40 years ago, is an unusual class of phenomena in which flowing particles experience orthogonally directed, spin-dependent forces--analogous to the conventional Lorentz force that gives the Hall effect, but opposite in sign for two spin states. Spin Hall effects have been observed for electrons flowing in spin-orbit-coupled materials such as GaAs and InGaAs (refs 2, 3) and for laser light traversing dielectric junctions. Here we observe the spin Hall effect in a quantum-degenerate Bose gas, and use the resulting spin-dependent Lorentz forces to realize a cold-atom spin transistor. By engineering a spatially inhomogeneous spin-orbit coupling field for our quantum gas, we explicitly introduce and measure the requisite spin-dependent Lorentz forces, finding them to be in excellent agreement with our calculations. This 'atomtronic' transistor behaves as a type of velocity-insensitive adiabatic spin selector, with potential application in devices such as magnetic or inertial sensors. In addition, such techniques for creating and measuring the spin Hall effect are clear prerequisites for engineering topological insulators and detecting their associated quantized spin Hall effects in quantum gases. As implemented, our system realizes a laser-actuated analogue to the archetypal semiconductor spintronic device, the Datta-Das spin transistor. 
PMID:23739329 Beeler, M C; Williams, R A; Jiménez-García, K; LeBlanc, L J; Perry, A R; Spielman, I B 2013-06-13 212 SciTech Connect The semi-inclusive deep-inelastic scattering of electrons off ²H and ³He with detection of slow protons and deuterons, respectively, i.e., the processes ²H(e,e'p)X and ³He(e,e'd)X, are calculated within the spectator mechanism, taking into account the final-state interaction of the nucleon debris with the detected protons and deuterons. It is shown that by a proper choice of the kinematics the origin of the EMC effect and the details of the interaction between the hadronizing quark and the nuclear medium can be investigated at a level which cannot be reached by inclusive deep-inelastic scattering. A comparison of the results of our calculations, containing no adjustable parameters, with recently available experimental data on the process ²H(e,e'p)X shows a good agreement in the backward hemisphere of the emitted nucleons. Theoretical predictions at energies that will be available at the upgraded Thomas Jefferson National Accelerator Facility are presented, and the possibility to investigate the proposed semi-inclusive processes at electron-ion colliders is briefly discussed. Ciofi degli Atti, C.; Kaptari, L. P. [Department of Physics, University of Perugia, Piazza dell'Universita 1, I-06123 Perugia (Italy) and Istituto Nazionale di Fisica Nucleare, Sezione di Perugia, Via A. Pascoli, I-06123 Perugia (Italy)] 2011-04-15 214 We theoretically investigate the finite-size effect in a quantum anomalous Hall (QAH) system. Using a Mn-doped HgTe quantum well as an example, we demonstrate that the coupling between the edge states is spin dependent and is related not only to the distance between the edges but also to the doping concentration. Thus, with proper tuning of the two, we can get four kinds of transport regimes: quantum spin Hall, QAH, edge conducting, and normal insulator. These transport regimes have distinguishing edge-conducting properties while the bulk is insulating. Our results give a general picture of the finite-size effect in a QAH system, and are important for transport experiments in QAH nanomaterials as well as future device applications. Fu, Hua-Hua; Lü, Jing-Tao; Gao, Jin-Hua 2014-05-01 215 PubMed We establish a quantum Otto engine cycle in which the working substance contacts with squeezed reservoirs during the two quantum isochoric processes. We consider two working substances: (1) a qubit and (2) two coupled qubits.
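For reference, the ordinary (unsqueezed) qubit Otto cycle that the record above generalizes can be worked out in closed form: with thermal strokes at level splittings ω_c and ω_h, the efficiency is η = 1 − ω_c/ω_h. A minimal sketch in natural units (k_B = ħ = 1; the parameter values are illustrative, not from the paper):

```python
import math

def excited_population(omega, temperature):
    """Gibbs excited-state population of a two-level system (k_B = hbar = 1)."""
    return math.exp(-omega / temperature) / (1.0 + math.exp(-omega / temperature))

w_c, w_h = 1.0, 2.0   # qubit level splitting on the cold/hot isochores
t_c, t_h = 1.0, 4.0   # cold/hot bath temperatures

p_cold = excited_population(w_c, t_c)   # population after the cold isochore
p_hot = excited_population(w_h, t_h)    # population after the hot isochore

heat_in = w_h * (p_hot - p_cold)            # heat absorbed from the hot bath
work_out = (w_h - w_c) * (p_hot - p_cold)   # net work over one cycle
efficiency = work_out / heat_in             # equals 1 - w_c / w_h
```

Positive work requires ω_h/T_h < ω_c/T_c; reservoir squeezing, as the entry explains, raises the effective hot temperature and so relaxes this condition.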
Due to the effects of squeezing, the working substance can be heated to a higher effective temperature, which leads to many interesting features different from the ordinary ones, such as (1) for the qubit as working substance, if we choose the squeezed parameters properly, positive work can be exported even when T(H) < T(L), and (2) the quantum fuel is more efficient than the classical one. PMID:23214736 Huang, X L; Wang, Tao; Yi, X X 2012-11-01 216 National Technical Information Service (NTIS) An overview of the theoretical formalism and up-to-date applications in quantum condensed-matter physics of the effective-potential and effective-Hamiltonian methods is given. The main steps of their unified derivation by the pure-quantum self-consistent ha... A. Cuccoli V. Tognetti R. Vaia P. Verrucchi 1996-01-01 217 Recent years have witnessed fast-growing developments in the use of quantum mechanics in technology-oriented and information-related fields, especially in metrology, in the development of nano-devices and in understanding highly efficient transport processes. The consequent theoretical and experimental outcomes are now driving new experimental tests of quantum mechanical effects with unprecedented accuracies that carry with them the concrete possibility of novel technological spin-offs. Indeed, the manifold advances in quantum optics, atom and ion manipulation, spintronics and nano-technologies are allowing direct experimental verification of new ideas and their applications to a large variety of fields. All of these activities have revitalized interest in quantum mechanics and created a unique framework in which theoretical and experimental physics have become fruitfully entangled with information theory, computer, material and life sciences. This special issue aims to provide an overview of what is currently being pursued in the field and of what kind of theoretical reference frame is being developed together with the experimental and theoretical results.
It consists of three sections: 1. Memory effects in quantum dynamics and quantum channels; 2. Driven open quantum systems; 3. Experiments concerning quantum coherence and/or decoherence. The first two sections are theoretical and concerned with open quantum systems. In all of the above-mentioned topics, the presence of an external environment needs to be taken into account, possibly in the presence of external controls and/or forcing, leading to driven open quantum systems. The open-system paradigm has proven to be central in the analysis and understanding of many basic issues of quantum mechanics, such as the measurement problem, quantum communication and coherence, as well as for an ever-growing number of applications. The theory is, however, well settled only when the so-called Markovian, or memoryless, approximation applies. When strong coupling or long environmental relaxation times make memory effects important for a realistic description of the dynamics, new strategies are called for, and the assessment of the general structure of non-Markovian dynamical equations for realistic systems is a crucial issue. The impact of quantum phenomena such as coherence and entanglement in biology has recently started to be considered as a possible source of the high efficiency of certain biological mechanisms, including, e.g., light harvesting in photosynthesis and enzyme catalysis. In this effort, the relatively unknown territory of driven open quantum systems is being explored from various directions, with special attention to the creation and stability of coherent structures away from thermal equilibrium. These investigations are likely to advance our understanding of the scope and role of quantum mechanics in living systems; at the same time they provide new ideas for the development of next generations of devices implementing highly efficient energy harvesting and conversion. The third section concerns experimental studies that are currently being pursued.
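The Markovian (memoryless) limit mentioned above can be exhibited concretely: for a qubit under amplitude damping, the Lindblad equation gives exponential decay of the excited population (rate γ) and of the coherence (rate γ/2), with no dependence on the history of the state. A minimal Euler-integration sketch of that limit (rate and step size are illustrative choices of mine):

```python
import numpy as np

gamma, dt, t_final = 1.0, 1e-4, 1.0
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma_minus = |g><e|
sp = sm.conj().T
rho = 0.5 * np.ones((2, 2), dtype=complex)       # initial state |+><+|

for _ in range(int(t_final / dt)):
    # Lindblad dissipator for amplitude damping (Hamiltonian set to zero)
    diss = sm @ rho @ sp - 0.5 * (sp @ sm @ rho + rho @ sp @ sm)
    rho = rho + gamma * dt * diss
```

The population ρ_ee decays as e^(−γt) and the coherence |ρ_eg| as e^(−γt/2); a non-Markovian environment would replace the constant rate γ by a memory kernel over the past state, which is exactly where the "new strategies" above are needed.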
Multidimensional nonlinear spectroscopy, in particular, has played an important role in enabling experimental detection of the signatures of coherence. Recent remarkable results suggest that coherence—both electronic and vibrational—survives for substantial timescales even in complex biological systems. The papers reported in this issue describe work at the forefront of this field, where researchers are seeking a detailed understanding of the experimental signatures of coherence and its implications for light-induced processes in biology and chemistry. Benatti, Fabio; Floreanini, Roberto; Scholes, Greg 2012-08-01 218 A discussion about the quantum mechanical effects on noise properties of ballistic (phase-coherent) nanoscale devices is presented. It is shown that quantum noise can be understood in terms of quantum trajectories. This interpretation provides a simple and intuitive explanation of the origin of quantum noise that can be very salutary for nanoelectronic engineers. In particular, an injection model is presented Xavier Oriols 2003-01-01 219 We discuss cosmological effects of the quantum loops of massless particles, which lead to temporal nonlocalities in the equations of motion governing the scale factor a(t). For the effects discussed here, loops cause the evolution of a(t) to depend on the memory of the curvature in the past with a weight that scales initially as 1/(t − t'). As one of our primary examples, we discuss the situation with a large number of light particles, such that these effects occur in a region where gravity may still be treated classically. However, we also describe the effect of quantum graviton loops and the full set of Standard Model particles. We show that these effects decrease with time in an expanding phase, leading to classical behavior at late time.
In a contracting phase, within our approximations the quantum results can lead to a bouncelike behavior at scales below the Planck mass, avoiding the singularities required classically by the Hawking-Penrose theorems. For conformally invariant fields, such as the Standard Model with a conformally coupled Higgs, this result is purely nonlocal and parameter independent. Donoghue, John F.; El-Menoufi, Basem Kamal 2014-05-01 220 PubMed We use numerical simulations to investigate the spin Hall effect in quantum wires in the presence of both Rashba and Dresselhaus spin-orbit coupling. We find that the intrinsic spin Hall effect is highly anisotropic with respect to the orientation of the wire, and that the nature of this anisotropy depends strongly on the electron density and the relative strengths of the Rashba and Dresselhaus spin-orbit couplings. In particular, at low densities, when only one subband of the quantum wire is occupied, the spin Hall effect is strongest for electron momentum along the [1-10] axis, which is the opposite of what is expected for the purely 2D case. In addition, when more than one subband is occupied, the strength and anisotropy of the spin Hall effect can vary greatly over relatively small changes in electron density, which makes it difficult to predict which wire orientation will maximize the strength of the spin Hall effect. These results help to illuminate the role of quantum confinement in spin-orbit-coupled systems, and can serve as a guide for future experimental work on the use of quantum wires for spin-Hall-based spintronic applications. PMID:22052818 Cummings, A W; Akis, R; Ferry, D K 2011-11-23 221 The theoretical aspects of the effect of multiple exciton generation (MEG) in quantum dots (QDs) have been analysed in this work. The statistical theory of MEG in QDs based on Fermi's approach is presented, taking into account the momentum conservation law.
According to Fermi, this approach should give the ultimate quantum efficiencies of multiple particle generation. The microscopic mechanism of this effect is based on the theory of electronic "shaking". According to this approach, the wave functions of the "shaking" electrons can be selected as Slater's functions with effective charges depending on the number of generated excitons. It is known from the theory that increasing the number of excitons leads to enhancement of the Auger recombination of electrons, which results in reduced quantum yields of excitons. The deviation of the averaged multiplicity of the MEG effect from the Poisson law of fluctuations has been investigated on the basis of synergetics approaches. In addition, the role of interface electronic states of QDs and ligands has been considered by means of quantum mechanical approaches. The size optimisation of QDs has been performed to maximise the multiplicity of the MEG effect. Oksengendler, B. L.; Turaeva, N. N.; Rashidova, S. S. 2012-06-01 222 According to recent discoveries, the large-scale universe is highly isotropic and homogeneous. It is pointed out that the Friedmann model with a homogeneous and isotropic Robertson-Walker metric provides the best description of the present universe. It is considered to be the main drawback of the model that it assumes a perfectly symmetric state from the beginning. The present investigation is concerned with a possible approach to the problem of making an initially inhomogeneous metric homogeneous. In the inhomogeneous conformally flat and spherically symmetric metric with the Bondi-type energy-momentum tensor, an additional quantum source is considered. Trace anomaly analysis for the massless scalar field makes it possible to calculate the vacuum expectation of the energy tensor for that field. Siemieniec-Ozieblo, G. 1984-03-01 223 Quantum diffusion equations with time-dependent transport coefficients are derived from generalized non-Markovian Langevin equations.
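In the Markovian, constant-coefficient limit of such Langevin equations, the fluctuation-dissipation relation fixes the noise strength to 2γk_BT/m and the stationary velocity variance to k_BT/m. A minimal Euler-Maruyama sketch of that special case (my own toy; the time-dependent, non-Markovian coefficients of the record above are beyond it):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, kT_over_m, dt = 1.0, 1.0, 0.01
n_particles, n_steps = 10_000, 2_000

v = np.zeros(n_particles)
# noise amplitude fixed by the fluctuation-dissipation relation
noise_amp = np.sqrt(2.0 * gamma * kT_over_m * dt)
for _ in range(n_steps):
    v += -gamma * v * dt + noise_amp * rng.standard_normal(n_particles)

stationary_var = v.var()   # should approach kT/m
```

Running for many relaxation times 1/γ, the ensemble variance of v settles at k_BT/m, which is the classical equipartition check the generalized relations in the entry reduce to.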
Generalized fluctuation-dissipation relations and analytical formulas for calculating friction and diffusion coefficients in nuclear processes are obtained. The asymptotics of the transport coefficients and of the correlation functions are investigated. The problem of correlation decay in quantum dissipative systems is studied. A comparative analysis of diffusion coefficients for the harmonic and inverted oscillators is performed. The role of quantum statistical effects during passage through a parabolic potential barrier is investigated. Sets of diffusion coefficients ensuring the purity of states at any time instant are found in cases of non-Markovian dynamics. The influence of different sets of transport coefficients on the rate of decay from a metastable state is studied in the framework of the master equation for reduced density matrices describing open quantum systems. The approach developed is applied to the investigation of fission processes and the processes of projectile-nucleus capture by target nuclei at bombarding energies in the vicinity of the Coulomb barrier. The influence of dissipation and fluctuations on these processes is taken into account in a self-consistent way. The evaporation residue cross sections for asymmetric fusion reactions are calculated from the derived capture probabilities averaged over all orientations of the deformed projectile and target nuclei. Sargsyan, V. V.; Kanokov, Z.; Adamian, G. G.; Antonenko, N. V. 2010-03-01 224 We calculate the linear momentum flux from merging black holes (BHs) with arbitrary masses and spin orientations, using the effective-one-body (EOB) model. This model includes an analytic description of the inspiral phase, a short merger, and a superposition of exponentially damped quasi-normal ring-down modes of a Kerr BH. By varying the matching point between inspiral and ring-down, we can estimate the systematic errors generated with this method.
Within these confidence limits, we find close agreement with previously reported results from numerical relativity. Using a Monte Carlo implementation of the EOB model, we are able to sample a large volume of BH parameter space and estimate the distribution of recoil velocities. For a range of mass ratios 1 ≤ m1/m2 ≤ 10, spin magnitudes of a1,2 = 0.9, and uniform random spin orientations, we find that a fraction f500 = 0.12 (+0.06/−0.05) of binaries have recoil velocities greater than 500 km/s and that a fraction f1000 = 0.027 (+0.021/−0.014) of binaries have kicks greater than 1000 km/s. These velocities are likely capable of ejecting the final BH from its host galaxy. Limiting the sample to comparable-mass binaries with m1/m2 ≤ 4, the typical kicks are even larger, with f500 = 0.31 (+0.13/−0.12) and f1000 = 0.079 (+0.062/−0.042). Schnittman, Jeremy D.; Buonanno, Alessandra 2007-06-01 225 We consider the effect of contact interaction in a prototypical quantum spin Hall system of pseudo-spin-1/2 particles. A strong effective magnetic field with opposite directions for the two spin states restricts two-dimensional particle motion to the lowest Landau level. While interaction between same-spin particles leads to incompressible correlated states at fractional filling factors as known from the fractional quantum Hall effect, these states are destabilized by interactions between opposite-spin particles. Exact results for two particles with opposite spin reveal a quasi-continuous spectrum of extended states with a large density of states at low energy. This has implications for the prospects of realizing the fractional quantum spin Hall effect in electronic or ultra-cold atom systems. Numerical diagonalization is used to extend the two-particle results to many bosonic particles and trapped systems. The interplay between an external trapping potential and spin-dependent interactions is shown to open up new possibilities for engineering exotic correlated many-particle states with ultra-cold atoms.
Fialko, O.; Brand, J.; Zülicke, U. 2014-02-01 226 We give an overview of the Integer Quantum Hall Effect. We propose a mathematical framework using Non-Commutative Geometry as defined by A. Connes. Within this framework, it is proved that the Hall conductivity is quantized and that plateaux occur when the Fermi energy varies in a region of localized states. J. Bellissard; A. van Elst; H. Schulz-Baldes 1994-01-01 227 The nonperturbative renormalization group flow of quantum Einstein gravity (QEG) is reviewed. It is argued that at large distances there could be strong renormalization effects, including a scale dependence of Newton's constant, which mimic the presence of dark matter at galactic and cosmological scales. Reuter, Martin; Weyer, Holger 229 SciTech Connect Effects of quantum corrections in the Bose-Hubbard model at finite temperature are investigated for a homogeneous atomic Bose gas in an optical lattice near its superfluid-insulator transition. Starting from the strong-coupling limit, higher-order quantum corrections due to the hopping interaction are included in a local approximation (a dynamical mean field approximation) of the non-crossing approximation. When the upper or lower Hubbard band approaches zero energy, there appears a shallow band in the middle of the Hubbard gap due to a strong correlation in the system. Matsumoto, Hideki; Takahashi, Kiyoshi; Ohashi, Yoji [Institute of Physics, University of Tsukuba, Ibaraki 305-8571 (Japan)] 2006-09-07 230 We investigate the influence of the Unruh effect on three-qubit quantum games.
In particular, we study the quantum Prisoners' Dilemma, a famous non-zero-sum game, for both entangled and unentangled initial states, and show that the acceleration of non-inertial frames disturbs the symmetry of the game. Using the various strategies, the novel Nash equilibrium is obtained at infinite acceleration (r = π/4). As a remarkable point, it is shown that in our three-player system, in contrast to the two-player quantum game in non-inertial frames (see Khan et al 2011 J. Phys. A: Math. Theor. 44 355302), there is no dominant strategy (not even a classical one) in the game, and choosing the quantum strategy by each player can become dominant depending on the kind of strategy chosen by the others. Since the entangled states of particles play an important role in the quantum game, we finally argue that the results of the players depend on the degree of entanglement in the initial state of the game. Goudarzi, H.; Beyrami, S. 2012-06-01 231 SciTech Connect We unify the quantum Zeno effect (QZE) and the 'bang-bang' (BB) decoupling method for suppressing decoherence in open quantum systems: in both cases strong coupling to an external system or apparatus induces a dynamical superselection rule that partitions the open system's Hilbert space into quantum Zeno subspaces. Our unification makes use of von Neumann's ergodic theorem and avoids making any of the symmetry assumptions usually made in discussions of BB. Thus we are able to generalize the BB to arbitrary fast and strong pulse sequences, requiring no symmetry, and to show the existence of two alternatives to a pulsed BB: continuous decoupling and pulsed measurements. Our unified treatment enables us to derive limits on the efficacy of the BB method: we explicitly show that the inverse QZE implies that the BB can in some cases accelerate, rather than inhibit, decoherence. Facchi, P.; Pascazio, S.
[Dipartimento di Fisica, Universita di Bari I-70126 Bari (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Bari, I-70126 Bari (Italy); Lidar, D.A. [Chemical Physics Theory Group, Chemistry Department, University of Toronto, 80 St. George Street, Toronto, Ontario, M5S 3H6 (Canada) 2004-03-01 232 SciTech Connect Precision navigation of spacecraft requires accurate knowledge of small forces, including the recoil force due to anisotropies of thermal radiation emitted by spacecraft systems. We develop a formalism to derive the thermal recoil force from the basic principles of radiative heat exchange and energy-momentum conservation. The thermal power emitted by the spacecraft can be computed from engineering data obtained from flight telemetry, which yields a practical approach to incorporate the thermal recoil force into precision spacecraft navigation. Alternatively, orbit determination can be used to estimate the contribution of the thermal recoil force. We apply this approach to the Pioneer anomaly using a simulated Pioneer 10 Doppler data set. Toth, Viktor T.; Turyshev, Slava G. [Ottawa, Ontario K1N 9H5 (Canada); Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, California 91109-8099 (United States) 2009-02-15 233 We present a quantum-relativistic model for the nonlinear interaction between large-amplitude electromagnetic (EM) waves and a quantum plasma. The model is based on a collective Klein-Gordon equation for the relativistic electrons, which is coupled with the Maxwell equations for the EM and electrostatic fields. The model is used to derive a nonlinear dispersion relation for the EM scattering instabilities in a relativistic quantum plasma. With application to the quantum free-electron laser (FEL), a relativistic electron beam is passing through an intense EM wave used as a wiggler to produce coherent tunable radiation. The frequency upshift of the radiation is tuned by the energy of the electron beam. 
The nonlinear dispersion relation reveals the importance of quantum recoil effects and oblique scattering of the radiation on the gain of the quantum FEL. 2012-12-01 234 SciTech Connect The ratio of the electric to the magnetic form factor of the neutron, G_En/G_Mn, was measured via recoil polarimetry from the quasielastic d(e⃗,e'n⃗)p reaction at three values of Q^2 [viz., 0.45, 1.15 and 1.47 (GeV/c)^2] in Hall C of the Thomas Jefferson National Accelerator Facility. Preliminary data indicate that G_En follows the Galster parameterization up to Q^2 = 1.15 (GeV/c)^2 and appears to rise above the Galster parameterization at Q^2 = 1.47 (GeV/c)^2. Richard Madey; Andrei Semenov; Simon Taylor; Aram Aghalaryan; Erick Crouse; Glen MacLachlan; Bradley Plaster; Shigeyuki Tajima; William Tireman; Chenyu Yan; Abdellah Ahmidouch; Brian Anderson; Razmik Asaturyan; O. Baker; Alan Baldwin; Herbert Breuer; Roger Carlini; Michael Christy; Steve Churchwell; Leon Cole; Samuel Danagoulian; Donal Day; Mostafa Elaasar; Rolf Ent; Manouchehr Farkhondeh; Howard Fenker; John Finn; Liping Gan; Kenneth Garrow; Paul Gueye; Calvin Howell; Bitao Hu; Mark Jones; James Kelly; Cynthia Keppel; Mahbubul Khandaker; Wooyoung Kim; Stanley Kowalski; Allison Lung; David Mack; D. Manley; Pete Markowitz; Joseph Mitchell; Hamlet Mkrtchyan; Allena Opper; Charles Perdrisat; Vina Punjabi; Brian Raue; Tilmann Reichelt; Joerg Reinhold; Julie Roche; Yoshinori Sato; Wonick Seo; Neven Simicevic; Gregory Smith; Samuel Stepanyan; Vardan Tadevosyan; Liguang Tang; Paul Ulmer; William Vulcan; John Watson; Steven Wells; Frank Wesselmann; Stephen Wood; Chen Yan; Seunghoon Yang; Lulin Yuan; Wei-Ming Zhang; Hong Guo Zhu; Xiaofeng Zhu 2003-05-01 235 We present a wide array of quantum measures on numerical solutions of one-dimensional Bose- and Fermi-Hubbard Hamiltonians for finite-size systems with open boundary conditions.
Finite-size effects are highly relevant to ultracold quantum gases in optical lattices, where an external trap creates smaller effective regions in the form of the celebrated “wedding cake” structure and the local density approximation is often not applicable. Specifically, for the Bose-Hubbard Hamiltonian we calculate number, quantum depletion, local von Neumann entropy, generalized entanglement or Q measure, fidelity, and fidelity susceptibility; for the Fermi-Hubbard Hamiltonian we also calculate the pairing correlations, magnetization, charge-density correlations, and antiferromagnetic structure factor. Our numerical method is imaginary time propagation via time-evolving block decimation. As part of our study we provide a careful comparison of canonical versus grand canonical ensembles and Gutzwiller versus entangled simulations. The most striking effect of finite size occurs for bosons: we observe a strong blurring of the tips of the Mott lobes accompanied by higher depletion, and show how the location of the first Mott lobe tip approaches the thermodynamic value as a function of system size. Carr, L. D.; Wall, M. L.; Schirmer, D. G.; Brown, R. C.; Williams, J. E.; Clark, Charles W. 2010-01-01 237 The authors propose and demonstrate the integration of a photodiode, a quantum-confined Stark-effect quantum-well optical modulator, and a metal-semiconductor field-effect transistor (MESFET) to make a field-effect transistor self-electrooptic effect device. This integration allows optical inputs and outputs on the surface of a GaAs-integrated circuit chip, compatible with standard MESFET processing. To provide an illustration of feasibility, the authors demonstrate signal D. A. B. Miller; M. D. Feuer; T. Y. Chang; S. C. Shunk; J. E. Henry; D. J. Burrows; D. S.
Chemla 1989-01-01 238 PubMed Central In this study the impact of quantum therapy on the meat quality of slaughtered pigs was investigated. For this purpose the pigs were treated with different doses of magnet-infrared-laser (MIL) radiation. Animals were divided into four groups according to radiation dose (4096, 512, and 64 Hz, and a control without application), which was applied in the lumbar area of musculus longissimus dorsi (loin) at various time intervals prior to slaughter (14 d, 24 h, and 1 h). Animals were slaughtered and the meat quality was evaluated by determining the pH value (1, 3, and 24 h post slaughter), drip loss, colour, and lactic acid and phosphoric acid amounts. MIL therapy can be used in various fields of veterinary medicine, such as surgery and orthopaedics, internal medicine, dentistry, pulmonology, gastroenterology, gynaecology, urology, nephrology, and dermatology. The results showed that MIL radiation applied shortly before slaughter (1 h) can cause a change in meat quality, as reflected by the non-standard development of pH values, increases in drip loss, and changes of meat colour. Bodnar, Martin; Nagy, Jozef; Popelka, Peter; Korenekova, Beata; Macanga, Jan; Nagyova, Alena 2011-01-01 239 This thesis presents tunneling measurements on bilayer two-dimensional (2D) electron systems in GaAs/AlGaAs double quantum wells. 2D-2D tunneling is applied here as a probe of the inter-layer correlated quantum Hall state at total Landau level filling factor ν_T = 1. This bilayer state is theoretically expected to be an excitonic superfluid with an associated dissipationless current and Josephson effect. In addition to the conventional signatures of the quantum Hall effect (a pronounced minimum in R_xx and associated quantization of R_xy), the strong interlayer correlations lead to a step-like discontinuity in the tunneling I-V.
Although reminiscent of the DC Josephson effect, the tunneling discontinuity has a finite extent even at the lowest temperatures; the peak in conductance, dI/dV, remains strongly temperature dependent even below 15 mK. The correlations develop when the inter- and intra-layer Coulomb interactions become comparable; their relative importance is determined by the ratio of layer separation to average electron spacing. Although this state is theoretically expected to be an excitonic superfluid, the degree to which interlayer tunneling is Josephson-like is controversial. At a critical layer separation the zero-bias tunneling feature is lost, which we interpret as signaling the quantum phase transition to the uncorrelated state. We study the dependence of the phase transition on electron density and relative density imbalance. In the presence of a parallel magnetic field, tunneling probes the response of the spectral function at finite wave vector. These tunneling spectra directly detect the expected linearly dispersing Goldstone mode; our measurement of this mode is in good agreement with theoretical expectations. There remains deep theoretical and experimental interest in this state, which represents an unprecedented convergence of the physics of quantum Hall effects and superconductivity. Spielman, Ian Bairstow 240 SciTech Connect The properties of linear and nonlinear electrostatic waves in a strongly coupled electron-ion quantum plasma are investigated. In this study, the inertialess electrons are degenerate, while the non-degenerate inertial ions are strongly correlated. The ion dynamics is governed by the continuity and generalized viscoelastic momentum equations.
The quantum forces associated with the quantum statistical pressure and the quantum recoil effect act on the degenerate electron fluid, whereas strong ion correlation effects are embedded in generalized viscoelastic momentum equation through the viscoelastic relaxation of ion correlations and ion fluid shear viscosities. Hence, the spectra of linear electrostatic modes are significantly affected by the strong ion coupling effect. In the weakly nonlinear limit, due to ion-ion correlations, the quantum plasma supports a dispersive shock wave, the dynamics of which is governed by the Korteweg-de Vries Burgers' equation. For a particular value of the quantum recoil effect, only monotonic shock structure is observed. Possible applications of our investigation are briefly mentioned. Ghosh, Samiran [Department of Applied Mathematics, University of Calcutta, 92, Acharya Prafulla Chandra Road, Kolkata 700 009 (India); Chakrabarti, Nikhil [Saha Institute of Nuclear Physics, 1/AF, Bidhannagar, Kolkata 700 064 (India); Shukla, P. K. [International Center for Advanced Studies in Physical Sciences and Institute for Theoretical Physics, Faculty of Physics and Astronomy, Ruhr University Bochum, D-44780 Bochum, Germany and Department of Mechanical and Aerospace Engineering and Centre for Energy Research, University of California San Diego, La Jolla, California 92093 (United States) 2012-07-15 241 We present a quantum transport simulation of graphene field-effect transistors based on the self consistent solution of 2D-Poisson solver and Dirac equation within the non-equilibrium Green's function formalism. The device operation of double gate 2D-graphene field effect transistors is investigated. The study emphasizes the band-to-band and Klein tunneling processes of massless carriers and the resulting features of the electrostatic modulation V. Hung Nguyen; A. Bournel; C. Chassat; P. 
Dollfus 2010-01-01 242 SciTech Connect The quantum Hall effect (QHE) is observed in graphene grown by chemical vapour deposition using a platinum catalyst. The QHE is seen even in samples which are irregularly decorated with disordered multilayer graphene patches and have very low mobility (<500 cm² V⁻¹ s⁻¹). The effect does not seem to depend on the electronic mobility and uniformity of the resulting material, which indicates the robustness of the QHE in graphene. Nam, Youngwoo, E-mail: youngwoo.nam@chalmers.se [Department of Physics and Astronomy, Seoul National University, Seoul 151-747 (Korea, Republic of); Department of Microtechnology and Nanoscience, Chalmers University of Technology, SE-412 96 Gothenburg (Sweden)]; Sun, Jie, E-mail: jie.sun@chalmers.se; Lindvall, Niclas; Kireev, Dmitry; Yurgens, August [Department of Microtechnology and Nanoscience, Chalmers University of Technology, SE-412 96 Gothenburg (Sweden)]; Jae Yang, Seung; Rae Park, Chong [Department of Materials Science and Engineering, Seoul National University, Seoul 151-747 (Korea, Republic of)]; Woo Park, Yung [Department of Physics and Astronomy, Seoul National University, Seoul 151-747 (Korea, Republic of)] 2013-12-02 243 SciTech Connect Typically, linear optical quantum computing (LOQC) models assume that all input photons are completely indistinguishable. In practice there will inevitably be nonidealities associated with the photons and the experimental setup which will introduce a degree of distinguishability between photons.
We consider a nondeterministic optical controlled-NOT gate, a fundamental LOQC gate, and examine the effect of temporal and spectral distinguishability on its operation. We also consider the effect of utilizing nonideal photon counters, which have finite bandwidth and time response. Rohde, Peter P.; Ralph, Timothy C. [Centre for Quantum Computer Technology, Department of Physics, University of Queensland, Queensland 4072 (Australia)] 2005-03-01 244 PubMed We demonstrate experimentally submicron-size self-assembled (SA) GaAs quantum rings (QRs) fabricated via the quantum size effect (QSE). An ultrathin In0.1Ga0.9As layer of varying thickness is deposited on the GaAs to modulate the surface nucleus diffusion barrier, and then the SA QRs are grown. It is found that the density of QRs is affected significantly by the thickness of the inserted In0.1Ga0.9As, and the diffusion-barrier modulation acts mainly within the first five monolayers. The physical mechanism behind this is discussed. Further analysis shows that a decrease of about 160 meV in the diffusion barrier can be achieved, which allows SA QRs with a density as low as one QR per 6 μm². Finally, QRs with inner diameters of 438 nm and outer diameters of 736 nm are fabricated using QSE. PMID:23006618 Tong, Cunzhu; Yoon, Soon Fatt; Wang, Lijun 2012-01-01 246 We provide a general formula of quantum transfer that includes the nonadiabatic effect under periodic environmental modulation by using full counting statistics in Hilbert-Schmidt space. Applying the formula to an anharmonic junction model that interacts with two bosonic environments within the Markovian approximation, we find that the quantum transfer is divided into the adiabatic (dynamical and geometrical phases) and nonadiabatic contributions. This extension shows the dependence of quantum transfer on the initial condition of the anharmonic junction just before the modulation, as well as on characteristic environmental parameters such as the interaction strength and the cut-off frequency of the spectral density. We show that the nonadiabatic contribution represents the reminiscent effect of past modulation, including the transition from the initial condition of the anharmonic junction to a steady state determined by the very beginning of the modulation. This enables us to tune the frequency range of modulation, whereby we can obtain the quantum flux corresponding to the geometrical phase by setting the initial condition of the anharmonic junction. Uchiyama, Chikako 2014-05-01 247 Magnetic-sensitive radical-ion-pair reactions are understood to underlie the biochemical magnetic compass used by avian species for navigation.
Radical-ion-pair reactions were recently shown to manifest a host of quantum-information-science effects, like quantum jumps and the quantum Zeno effect. We here show that the quantum Zeno effect immunizes the magnetic and angular sensitivity of the avian compass mechanism against the deleterious and molecule-specific exchange and dipolar interactions. I. K. Kominis 2009-01-01 248 SciTech Connect Studies of atoms, ions and molecules with synchrotron radiation have generally focused on measurements of properties of the electrons ejected during, or after, the photoionization process. Much can also be learned, however, about the atomic or molecular relaxation process by studies of the residual ions or molecular fragments following inner-shell photoionization. Measurements are reported of the mean kinetic energies of highly charged argon, krypton, and xenon recoil ions produced by vacancy cascades following inner-shell photoionization using white and monochromatic synchrotron x radiation. The energies are much lower than for the same charge-state ions produced by charged-particle impact. The results may be applicable to the design of future angle-resolved ion-atom collision experiments. Photoion charge distributions are presented and compared with other measurements and calculations. Related experiments with synchrotron-radiation-produced recoil ions, including photoionization of stored ions and measurement of shakeoff in near-threshold excitation, are briefly discussed. 24 refs., 6 figs., 1 tab. Levin, J.C. 1989-01-01 249 SciTech Connect The current-carrying state of a nanometer field-effect transistor (FET) may become unstable against the generation of high-frequency plasma waves and lead to the generation of terahertz radiation. In this paper, the influences of the magnetic field, quantum effects, electron exchange-correlation, and the thermal motion of electrons on the instability of the plasma waves in a nanometer FET are reported.
We find that, while the electron exchange-correlation suppresses the radiation power, the magnetic field, the quantum effects, and the thermal motion of electrons can enhance the radiation power. The radiation frequency increases with the quantum effects and the thermal motion of electrons, but decreases with the electron exchange-correlation effect. Interestingly, we find that the magnetic field can suppress the quantum effects and the thermal motion of electrons, and the radiation frequency changes non-monotonically with the magnetic field. These properties could make the nanometer FET advantageous for the realization of practical terahertz oscillations. Zhang, Li-Ping; Xue, Ju-Kui [College of Physics and Electronic Engineering, Northwest Normal University, Lanzhou 730070 (China)] 2013-08-15 250 We report on the observation of the magnetic quantum ratchet effect in metal-oxide-semiconductor field-effect transistors on a silicon surface (Si-MOSFETs). We show that the excitation of an unbiased transistor by the ac electric field of terahertz radiation at normal incidence leads to a direct electric current between the source and drain contacts if the transistor is subjected to an in-plane magnetic field. The current rises linearly with the magnetic field strength and quadratically with the ac electric field amplitude. It depends on the polarization state of the ac field and can be induced by both linearly and circularly polarized radiation. We present the quasi-classical and quantum theories of the observed effect and show that the current originates from the Lorentz force acting upon carriers in the asymmetric inversion channels of the transistors. Ganichev, S. D.; Tarasenko, S. A.; Karch, J.; Kamann, J.; Kvon, Z. D. 2014-06-01 251 SciTech Connect We demonstrate that repeated measurements in disordered systems can induce a quantum anti-Zeno effect under certain conditions to enhance quantum transport.
The enhancement of energy transfer is really exhibited in multisite models under repeated measurements. The optimal measurement interval for the anti-Zeno effect and the maximal efficiency of energy transfer are specified in terms of the relevant physical parameters. Since the environment acts as frequent measurements on the system, the decoherence-induced energy transfer, which has been discussed recently for photosynthetic complexes, may be interpreted in terms of the anti-Zeno effect. We further find an interesting phenomenon in a specific three-site case, where local decoherence or repeated measurements may even promote entanglement generation between the nonlocal sites. Fujii, Keisuke; Yamamoto, Katsuji [Department of Nuclear Engineering, Kyoto University, Kyoto 606-8501 (Japan) 2010-10-15 252 SciTech Connect With increasing communication rates via quantum channels, memory effects become unavoidable whenever the use rate of the channel is comparable to the typical relaxation time of the channel environment. We introduce a model of a bosonic memory channel, describing correlated noise effects in quantum-optical processes via attenuating or amplifying media. To study such a channel model, we make use of a proper set of collective field variables, which allows us to unravel the memory effects, mapping the n-fold concatenation of the memory channel to a unitarily equivalent, direct product of n single-mode bosonic channels. We hence estimate the channel capacities by relying on known results for the memoryless setting. Our findings show that the model is characterized by two different regimes, in which the cross correlations induced by the noise among different channel uses are either exponentially enhanced or exponentially reduced. 
Lupo, Cosmo [School of Science and Technology, University of Camerino, via Madonna delle Carceri 9, I-62032 Camerino (Italy); Giovannetti, Vittorio [NEST, Scuola Normale Superiore and Istituto Nanoscienze-CNR, Piazza dei Cavalieri 7, I-56126 Pisa (Italy); Mancini, Stefano [School of Science and Technology, University of Camerino, via Madonna delle Carceri 9, I-62032 Camerino (Italy); INFN-Sezione di Perugia, I-06123 Perugia (Italy) 2010-09-15 253 PubMed Magnetic-sensitive radical-ion-pair reactions are understood to underlie the biochemical magnetic compass used by avian species for navigation. Recent experiments have provided growing evidence for the radical-ion-pair magnetoreception mechanism, while recent theoretical advances have unravelled the quantum nature of radical-ion-pair reactions, which were shown to manifest a host of quantum-information-science concepts and effects, like quantum measurement, quantum jumps and the quantum Zeno effect. We here show that the quantum Zeno effect provides for the robustness of the avian compass mechanism, and immunizes its magnetic and angular sensitivity against the deleterious and molecule-specific exchange and dipolar interactions. PMID:22142839 Dellis, A T; Kominis, I K 2012-03-01 254 Quantum tunnelling is a common fundamental quantum mechanical phenomenon that originates from the wave-like characteristics of quantum particles. Although the quantum tunnelling effect was first observed 85 years ago, some questions regarding the dynamics of quantum tunnelling remain unresolved. Here we realize a quantum tunnelling system using two-dimensional ionic structures in a linear Paul trap. We demonstrate that the charged particles in this quantum tunnelling system are coupled to the vector potential of a magnetic field throughout the entire process, even during quantum tunnelling, as indicated by the manifestation of the Aharonov–Bohm effect in this system. 
The tunnelling rate of the structures depends periodically on the strength of the magnetic field, with a period equal to the magnetic flux quantum Φ0 through the rotor [(0.99±0.07) × Φ0]. Noguchi, Atsushi; Shikano, Yutaka; Toyoda, Kenji; Urabe, Shinji 2014-05-01 256 Localized electron spins confined in semiconductor quantum dots are being studied by many groups as possible elementary qubits for solid-state quantum computation. We theoretically consider the effects of having unintentional charged impurities in laterally coupled two-dimensional double (GaAs) quantum dot systems, where each dot contains one or two electrons and a single charged impurity in the presence of an external magnetic field. We calculate the effect of the impurity on the 2-electron energy spectrum of each individual dot as well as on the spectrum of the coupled-double-dot 2-electron system.
We find that the singlet-triplet exchange splitting between the two lowest energy states, both for the individual dots and the coupled dot system, depends sensitively on the location of the impurity and its coupling strength (i.e. the effective charge). We comment on the impurity effect in spin qubit operations in the double dot system based on our numerical results. This work is supported by LPS-CMTC and CNAM. Nguyen, Nga; Das Sarma, Sankar 2011-03-01 257 We present a theoretical study of thermal effect in quantum-dot cellular automata (QCA). A quantum statistical model has been introduced to obtain the thermal average of polarization of a QCA cell. We have studied the thermal effect on an inverter, a majority gate and planar arrays of different sizes. The theoretical analysis has been approximated for a two-state model where the cells are in any one of the two possible eigenstates of the cell Hamiltonian. Hence, only the ±1 polarization values are taken into account for the statistical analysis. A numerical computational model has been developed to obtain all possible configurations of the cells in an array. In general, the average polarization of each cell decreases with temperature as well as with the distance from the driver cells. We have found the temperatures for thermal breakdown. The results demonstrate the critical nature of temperature dependence for the operation of QCA. Sturzu, I.; Kanuchok, J. L.; Khatun, M.; Tougaw, P. D. 2005-03-01 258 SciTech Connect We study theoretically the properties of two Bose-Einstein condensates in different spin states, represented by a double Fock state. Individual measurements of the spins of the particles are performed in transverse directions, giving access to the relative phase of the condensates. Initially, this phase is completely undefined, and the first measurements provide random results. 
But a fixed value of this phase rapidly emerges under the effect of the successive quantum measurements, giving rise to a quasiclassical situation where all spins have parallel transverse orientations. If the number of measurements reaches its maximum (the number of particles), quantum effects show up again, giving rise to violations of Bell-type inequalities. The violation of Bell-Clauser-Horne-Shimony-Holt inequalities with an arbitrarily large number of spins may be comparable (or even equal) to that obtained with two spins. Laloë, F. [Laboratoire Kastler Brossel, ENS, UPMC, CNRS, 24 rue Lhomond, 75005 Paris (France)]; Mullin, W. J. [Department of Physics, University of Massachusetts, Amherst, Massachusetts 01003 (United States)] 2007-10-12 260 PubMed Quantum Szilard engines with an arbitrary number of identical particles are studied in this paper. Analytical expressions for the total work in the low- and high-temperature limits are obtained.
The total work depends on the particle statistics, the odd-even parity, and the temperature of the system. The parity effect is drastic in fermion systems. An odd number of fermions perform work as if they were a single fermion, and an even number of fermions do not perform any work at all. For bosons, there exists a phase transition at a critical temperature below which the work done by the engine is always negative. It is found that only above a certain temperature does the bosonic quantum Szilard engine do more work than the fermionic one. The possible experimental verification of these effects is discussed. PMID:22400530 Lu, Yao; Long, Gui Lu 2012-01-01 261 Quantum Szilard engines with an arbitrary number of identical particles are studied in this paper. Analytical expressions for the total work in the low- and high-temperature limits are obtained. The total work depends on the particle statistics, the odd-even parity, and the temperature of the system. The parity effect is drastic in fermion systems. An odd number of fermions perform work as if they were a single fermion, and an even number of fermions do not perform any work at all. For bosons, there exists a phase transition at a critical temperature below which the work done by the engine is always negative. It is found that only above a certain temperature does the bosonic quantum Szilard engine do more work than the fermionic one. The possible experimental verification of these effects is discussed. Lu, Yao; Long, Gui Lu 2012-01-01 262 We report an observation, via sensitive shot noise measurements, of charge fractionalization of chiral edge electrons in the integer quantum Hall effect regime. Such fractionalization results solely from interchannel Coulomb interaction, leading electrons to decompose into excitations carrying fractional charges.
The experiment was performed by guiding a partitioned current-carrying edge channel in proximity to another unbiased edge channel, leading to shot noise in the unbiased edge channel without net current, which exhibited an unconventional dependence on the partitioning. The determination of the fractional excitations, as well as the relative velocities of the two original (prior to the interaction) channels, relied on a recent theory pertaining to this measurement. Our result exemplifies the correlated nature of multiple chiral edge channels in the integer quantum Hall effect regime. Inoue, Hiroyuki; Grivnin, Anna; Ofek, Nissim; Neder, Izhar; Heiblum, Moty; Umansky, Vladimir; Mahalu, Diana 2014-04-01 263 SciTech Connect We construct a string theory realization of the 4+1d quantum Hall effect recently discovered by Zhang and Hu. The string theory picture contains coincident D4-branes forming an S^4 and having D0-branes (i.e. instantons) in their world-volume. The charged particles are modeled as string ends. Their configuration space approaches in the large n limit a CP^3, which is an S^2 fibration over S^4, the extra S^2 being made out of the Chan-Paton degrees of freedom. An alternative matrix theory description involves the fuzzy S^4. We also find that there is a hierarchy of quantum Hall effects in odd-dimensional spacetimes, generalizing the known cases in 2+1d and 4+1d. Fabinger, Michal 2002-08-08 264 The interaction of an atomic gas confined inside a cavity containing a strong electromagnetic field is numerically and theoretically investigated in a regime where recoil effects are not negligible. The spontaneous appearance of a density grating (atomic bunching) accompanied by the onset of a coherent, back-propagating electromagnetic wave is found to be ruled by a continuous phase transition. Numerical tests allow us to convincingly prove that the transition is steered by the appearance of a periodic atomic density modulation.
Consideration of different experimental relaxation mechanisms induces us to analyze the problem in nearly analytic form, in the large detuning limit, using both a Vlasov approach and a Fokker-Planck description. The application of our predictions to recent experimental findings, reported by Kruse et al. [Phys. Rev. Lett. 91, 183601 (2003)], yields a semiquantitative agreement with the observations. Javaloyes, J.; Perrin, M.; Lippi, G. L.; Politi, A. 2004-08-01 265 We study the following problem: Is it possible to explain the quantum interference of probabilities in the purely corpuscular model for elementary particles? We demonstrate that (by taking into account perturbation effects of measurement and preparation procedures) we can obtain a $\cos\theta$-perturbation (interference term) in the probabilistic rule connecting preparation procedures for purely corpuscular objects. On one hand, our investigation demonstrated that Andrei Khrennikov 2001-01-01 266 We consider small ballistic quantum dots weakly coupled to the leads in the chaotic regime and look for significant spin-orbit effects. We find that these effects can become quite prominent in the vicinity of degeneracies of many-body energies. We illustrate the idea by considering a case where the intrinsic exchange term -JS^2 brings singlet and triplet many-body states near each Ganpathy Murthy; R. Shankar 2006-01-01 267 PubMed The lateral Casimir-Polder force between an atom and a corrugated surface should allow one to study experimentally nontrivial geometrical effects in the electromagnetic quantum vacuum. Here, we derive the theoretical expression of this force in the scattering approach. We show that large corrections to the "proximity force approximation" could be measured using present-day technology with a Bose-Einstein condensate used as a vacuum field sensor.
PMID:18352246 Dalvit, Diego A R; Neto, Paulo A Maia; Lambrecht, Astrid; Reynaud, Serge 2008-02-01 268 SciTech Connect The lateral Casimir-Polder force between an atom and a corrugated surface should allow one to study experimentally nontrivial geometrical effects in the electromagnetic quantum vacuum. Here, we derive the theoretical expression of this force in the scattering approach. We show that large corrections to the 'proximity force approximation' could be measured using present-day technology with a Bose-Einstein condensate used as a vacuum field sensor. Dalvit, Diego A. R. [Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Neto, Paulo A. Maia [Instituto de Fisica, UFRJ, CP 68528, Rio de Janeiro, RJ, 21941-972 (Brazil); Lambrecht, Astrid; Reynaud, Serge [Laboratoire Kastler Brossel, Case 74, CNRS, ENS, UPMC, Campus Jussieu, F-75252 Paris Cedex 05 (France) 2008-02-01 269 We examine the dilution effects in an orbital model, termed the orbital compass model, which corresponds to the two-dimensional version of the eg-orbital model. An unconventional low-dimensional orbital alignment termed the directional order is confirmed to be realized by utilizing the quantum Monte-Carlo method. Impurity dependence of the ordering temperature of the directional order is numerically examined. We show that T. Tanaka; M. Matsumoto; S. Ishihara 2007-01-01 270 This thesis studies two models of the fractional quantum Hall effect (FQHE), the bosonic (Chern-Simons-Landau-Ginzburg) description and the fermionic (composite fermion gauge theory) description. The bosonic theory attempts to describe the FQHE states at filling fractions ν = 1/(2n+1) while the fermionic theory attempts to describe the states at ν = p/(2np±1) and the metallic states in between. Within the bosonic theory, Stephanie Hythe Curnoe 1997-01-01 271 SciTech Connect The theory of the hot electron microbolometer proposed by Nahum et al.
assumed that the photon energy is thermalized in the electrons in the Cu absorber before relaxing to the lattice. Since the photons initially excite individual electrons to ħω >> k_B T, however, direct relaxation of these hot electrons to phonons must also be considered. Theoretical estimates suggest that this extra relaxation channel increases the effective thermal conductance for ħω >> k_B T and influences bolometer noise. Calculations of these effects are presented which predict very useful performance both for ground-based and space-based astronomical photometry at millimeter and submillimeter wavelengths. Tang, A.; Richards, P.L. 1994-10-01 272 National Technical Information Service (NTIS) In systems bound by the Coulomb potential with a short-range distortion, the reconstruction of atomic spectrum (or Zel'dovich effect) can appear. Some peculiarities of this phenomenon for the state of nonzero angular momentum are discussed. Analytical pro... B. M. Karnakov V. D. Mur A. E. Kudryavtsev V. S. Popov 1985-01-01 273 Within the framework of a fully quantum mechanical approach we use a generalized density matrix formalism to study the spin-orbit coupling effects in a triple dot quantum shuttle. An interesting feature of this type of nanoelectromechanical system is that the interplay between the electronic, spin, and mechanical degrees of freedom gives rise to novel transport phenomena that have attracted a great deal of interest in both applied and basic research. In this work, the effect of spin-orbit coupling is incorporated into the system by introducing non-spin-conserving tunneling elements between the quantum dots. We explore the features of spin-polarized current by changing the Zeeman-split levels of the dots, and the frequency of the oscillating central dot.
We show that the spin-orbit effect manifests itself as sidebands in the spin-polarized current, and that the tunneling channels can be controlled by adequately tuning the relative energies of the Zeeman-split levels, and by manipulating the current contribution from the vibrational modes. Villavicencio, Jorge; Maldonado, Irene; Cota, Ernesto; Platero, Gloria 2012-02-01 274 SciTech Connect Topological matter is characterized by the presence of a topological BF term in its long-distance effective action. Topological defects due to the compactness of the U(1) gauge fields induce quantum phase transitions between topological insulators, topological superconductors, and topological confinement. In conventional superconductivity, because of spontaneous symmetry breaking, the photon acquires a mass due to the Anderson-Higgs mechanism. In this paper we derive the corresponding effective actions for the electromagnetic field in topological superconductors and topological confinement phases. In topological superconductors magnetic flux is confined and the photon acquires a topological mass through the BF mechanism: no symmetry breaking is involved, the ground state has topological order, and the transition is induced by quantum fluctuations. In topological confinement, instead, electric charge is linearly confined and the photon becomes a massive antisymmetric tensor via the Stueckelberg mechanism. Oblique confinement phases arise when the string condensate carries both magnetic and electric flux (dyonic strings). Such phases are characterized by a vortex quantum Hall effect potentially relevant for the dissipationless transport of information stored on vortices. Diamantini, M. Cristina; Trugenberger, Carlo A. [INFN and Dipartimento di Fisica, University of Perugia, via A. 
Pascoli, I-06100 Perugia (Italy); SwissScientific, chemin Diodati 10, CH-1223 Cologny (Switzerland) 2011-09-01 275 SciTech Connect We construct physical semiclassical states annihilated by the Hamiltonian constraint operator in the framework of loop quantum cosmology as a method of systematically determining the regime and validity of the semiclassical limit of the quantum theory. Our results indicate that the evolution can be effectively described using continuous classical equations of motion with nonperturbative corrections down to near the Planck scale below which the Universe can only be described by the discrete quantum constraint. These results, for the first time, provide concrete evidence of the emergence of classicality in loop quantum cosmology and also clearly demarcate the domain of validity of different effective theories. We prove the validity of modified Friedmann dynamics incorporating discrete quantum geometry effects which can lead to various new phenomenological applications. Furthermore the understanding of semiclassical states allows for a framework for interpreting the quantum wave functions and understanding questions of a semiclassical nature within the quantum theory of loop quantum cosmology. Singh, Parampreet [Institute for Gravitational Physics and Geometry, Pennsylvania State University, 104 Davey Lab, University Park, Pennsylvania 16802 (United States); Vandersloot, Kevin [Institute for Gravitational Physics and Geometry, Pennsylvania State University, 104 Davey Lab, University Park, Pennsylvania 16802 (United States); Max-Planck-Institut fuer Gravitationsphysik, Albert-Einstein-Institut, Am Muehlenberg 1, D-14476 Golm (Germany) 2005-10-15 276 A new proton recoil telescope (PRT) detector is presented: it is composed by an active multilayer of segmented plastic scintillators as neutron to proton converter, by two silicon strip detectors and by a final thick CsI(Tl) scintillator. 
The PRT can be used to measure neutron spectra in the range 2-160 MeV. The detector characteristics have been studied in detail with the help of Monte Carlo simulations. The overall energy resolution of the system ranges from about 20% at the lowest neutron energy to about 2% at 160 MeV. The global efficiency is about 3×10^-5. Experimental tests have been performed by using the reaction 13C(d,n) at 40 MeV deuteron energy. Donzella, A.; Barbui, M.; Bocci, F.; Bonomi, G.; Cinausero, M.; Fabris, D.; Fontana, A.; Giroletti, E.; Lunardon, M.; Moretto, S.; Nebbia, G.; Necchi, M. M.; Pesente, S.; Prete, G.; Rizzi, V.; Viesti, G.; Zenoni, A. 2010-01-01 277 SciTech Connect An experiment was performed to measure the recoil of an atom due to the absorption of up to 64 photons. Cesium atoms are magneto-optically trapped and laser-cooled, and then launched onto a fountain trajectory. Near their apogee, a series of Doppler-sensitive stimulated Raman pulses are applied. These give rise to two well-resolved sets of atomic interference fringes. The separation of the center frequencies of these sets of fringes is equal to an integral number of photon recoil shifts. The author has achieved a relative precision in the photon recoil measurement of 0.1 ppm in two hours of data collection. This thesis describes this measurement and presents a detailed theoretical and experimental study of systematic errors that can affect its accuracy. These results should be applicable to light pulse atomic interferometers in general. Measurement of the photon recoil allows us to determine ħ/m_Cs, and hence the fine-structure constant. Straightforward changes in the apparatus should ultimately lead to a relative precision near 1 ppb. Weiss, D.S. 1993-01-01 278 The paper describes the design principle and the structures of AlGaAs high-speed light emitters based on the quantum-confined Stark effect (QCSE).
The scheme of high-speed switching of spontaneous emissions in these devices does not rely on changes in carrier population, but depends instead on the effects of the electric fields on the oscillator strengths in quantum-well active layers of the devices. Masamichi Yamanishi 1992-01-01 279 SciTech Connect All modern routes leading to a quantum theory of gravity - i.e., perturbative quantum gravitational one-loop exact correction to the global chiral current in the standard model, string theory, and loop quantum gravity - require modification of the classical Einstein-Hilbert action for the spacetime metric by the addition of a parity-violating Chern-Simons term. The introduction of such a term leads to spacetimes that manifest an amplitude birefringence in the propagation of gravitational waves. While the degree of birefringence may be intrinsically small, its effects on a gravitational wave accumulate as the wave propagates. Observation of gravitational waves that have propagated over cosmological distances may allow the measurement of even a small birefringence, providing evidence of quantum gravitational effects. The proposed Laser Interferometer Space Antenna (LISA) will be sensitive enough to observe the gravitational waves from sources at cosmological distances great enough that interesting bounds on the Chern-Simons coupling may be found. Here we evaluate the effect of a Chern-Simons induced spacetime birefringence on the propagation of gravitational waves from such systems. Focusing attention on the gravitational waves from coalescing binary black hole systems, which LISA will be capable of observing at redshifts approaching 30, we find that the signature of Chern-Simons gravity is a time-dependent change in the apparent orientation of the binary's orbital angular momentum with respect to the observer line-of-sight, with the magnitude of change reflecting the integrated history of the Chern-Simons coupling over the worldline of the radiation wave front.
While spin-orbit coupling in the binary system will also lead to an evolution of the system's orbital angular momentum, the time dependence and other details of this real effect are different from the apparent effect produced by Chern-Simons birefringence, allowing the two effects to be separately identified. In this way gravitational-wave observations with LISA may thus provide our first and only opportunity to probe the quantum structure of spacetime over cosmological distances. Alexander, Stephon; Finn, Lee Samuel; Yunes, Nicolas [Pennsylvania State University, University Park, Pennsylvania 16802 (United States)] 2008-09-15 280 Based on the scattering matrix approach, we systematically investigate the anharmonic effect of the pumped current in double-barrier structures with adiabatic time-modulation of two sinusoidal AC driven potential heights. The pumped current as a function of the phase difference between the two driven potentials appears to be sinusoidal, but it actually contains sine functions of twice the phase difference and of higher multiples. It is found that this anharmonic effect of the pumped current is determined jointly by the Berry curvature and the parameter-variation loop trajectory. Therefore, a small ratio of the driving amplitude to the static amplitude is not necessary for the harmonic pattern in the pumped current to dominate when the Berry curvature is smooth on the surface enclosed by the parameter-variation loop. Deng, Wei-Yin; Zhong, Ke-Ju; Zhu, Rui; Deng, Wen-Ji 2014-04-01 281 The effects of polarization of a vacuum by an external gravitational field in a system of spinor, scalar, and vector mass-particles are analyzed within the framework of a model of a conformally planar space-time. Expressions for radiative corrections to the Einstein equations are derived, both of the second- and third-order. The role of these corrections in gravitational theory, in asymptotic regions of weak and strong gravitational fields, is discussed. Beilin, V. A.; Vereshkov, G.
M.; Grishkan, Iu. S.; Ivanov, N. M.; Nesterenko, V. A.; Poltavtsev, A. N. 1980-06-01 282 SciTech Connect It is analytically shown that both the charge carrier dynamics in quantum dots and their capture into the quantum dots from the matrix material have a significant effect on the two-state lasing phenomenon in quantum dot lasers. In particular, the consideration of desynchronization in electron and hole capture into quantum dots allows one to describe the quenching of ground-state lasing observed at high injection currents both qualitatively and quantitatively. At the same time, an analysis of the charge carrier dynamics in a single quantum dot allowed us to describe the temperature dependences of the emission power via the ground- and excited-state optical transitions of quantum dots. Korenev, V. V., E-mail: korenev@spbau.ru; Savelyev, A. V.; Zhukov, A. E.; Omelchenko, A. V.; Maximov, M. V. [Saint Petersburg Academic University-Nanotechnology Research and Education Center (Russian Federation)] 2013-10-15 283 SciTech Connect Vacuum polarization in QED in a background gravitational field induces interactions which effectively modify the classical picture of light rays, as the null geodesics of spacetime. These interactions violate the strong equivalence principle and affect the propagation of light, leading to superluminal photon velocities. Taking into account the QED vacuum polarization, we study the propagation of a bundle of rays in a background gravitational field. To do so we study the perturbative deformation of the Raychaudhuri equation through the influence of vacuum polarization on photon propagation. We analyze the contribution of the above interactions to the optical scalars, namely, shear, vorticity, and expansion using the Newman-Penrose formalism. Ahmadi, N.
[Department of Physics, University of Tehran, North Karegar Avenue, Tehran 14395-547 (Iran, Islamic Republic of); Nouri-Zonoz, M. [Department of Physics, University of Tehran, North Karegar Avenue, Tehran 14395-547 (Iran, Islamic Republic of); Institute for Studies in Theoretical Physics and Mathematics, P.O. Box 19395-5531 Tehran (Iran, Islamic Republic of) 2006-08-15 284 Fluorescent molecules have been widely used to detect and visualize structures and processes in biological samples due to their extraordinary sensitivity. However, the emission spectra of fluorophores are usually broad, and accurate identification is difficult. Recently, experiments have shown that energy shifts due to the Stark effect can be used to aid the identification of organic molecules [1]. The Stark effect originates from the shifting/splitting of energy levels when a molecule is under an external electric field, which appears as a shift/splitting of a peak in the absorption/emission spectra. The size of the shift depends on the magnitude of the external field and the molecular structure. In this talk we will show our theoretical study of the peak shifts in emission spectra for a series of organic fluorophores such as tyrosine, tryptophan, rhodamine123 and coumarin314 using density functional theory. We find that a particular peak shift is determined by the local dipole moments of molecular orbitals rather than the global dipole moment of the molecule. These molecule-specific shifts in emission spectra may make it possible to improve molecular identification in biosensors. Our results will be compared with experimental data. [1] Unpublished, S. Sarkar, B. Kanchibotla, S. Bandyopadhyay, G. Tepper, J. Edwards, J. Anderson, and R. Kessick.
Peng, Xihong; Anderson, John; Tepper, Gary; Bandyopadhyay, Supriyo; Nayak, Saroj 2008-03-01 285 We explore the role of electron correlation in quasi-one-dimensional quantum wires as the range of the interaction potential is changed and their thickness is varied by performing exact quantum Monte Carlo simulations at various electronic densities. In the case of unscreened interactions with a long-range 1/x tail there is a crossover from a liquid to a quasi-Wigner crystal state as the density decreases. When this interaction is screened, quasi-long-range order is prevented from forming, although a significant correlation with 4kF periodicity is still present at low densities. At even lower electron concentration, exchange is suppressed and the electrons behave like spinless fermions. Finally, we study the effect of electron correlations in the double quantum wire experiment [Steinberg, Phys. Rev. B 73, 113307 (2006)] by introducing an accurate model for the screening in the experiment and explicitly including the finite length of the system in our simulations. We find that decreasing the electron density continuously drives the system from a liquid to a state with quite strong 4kF correlations. This crossover takes place around 22 μm^-1, near the density where the electron localization occurs in the experiment. The charge and spin velocities are also in good agreement with the experimental findings in the proximity of the crossover. We argue that correlation effects play an important role at the onset of the localization transition. Shulenburger, Luke; Casula, Michele; Senatore, Gaetano; Martin, Richard M. 2008-10-01 286 Very recently Ali et al. (2009) [5] proposed a new Generalized Uncertainty Principle (or GUP) with a linear term in the Planck length. In this Letter the effect of this GUP is studied in quantum cosmological models with dust and cosmic string as the perfect fluid.
For the quantum mechanical description it is possible to find the wave packet which results from the superposition of the stationary wave functions of the Wheeler-DeWitt equation. However, the norm of the wave packets turned out to be time dependent and hence the model became non-unitary. The loss of unitarity is due to the fact that the presence of the linear term in the Planck length in the Generalized Uncertainty Principle made the Hamiltonian non-Hermitian. Majumder, Barun 2011-05-01 287 PubMed We find that the Kondo effect results in a new universality class for an antiferromagnetic (AF) quantum critical point (QCP) in the heavy fermion quantum transition, described by deconfined bosonic spinons with the dynamical exponent z=3. We show that the thermodynamics and transport of the z=3 AF QCP are consistent with the well-known non-Fermi liquid physics such as the divergent Grüneisen ratio with an exponent 2/3 and temperature-linear resistivity. We propose that the hallmark of the Kondo-driven AF QCP is a uniform spin susceptibility that diverges with an exponent 2/3, remarkably consistent with the experimental observations for YbRh2Si2. PMID:20482002 Kim, Ki-Seok; Jia, Chenglong 2010-04-16 288 PubMed Metallic spherical dome shells have received much attention in recent years because they have proven to possess highly impressive optical properties. The distinctive changes expected to occur owing to quantum confinement of conduction electrons in these nanoparticles as their thickness is reduced have not been properly investigated. Here we carry out a detailed analytical derivation of the quantum contributions by introducing linearly shifted Associated Legendre Polynomials, which form an approximate orthonormal eigenbasis for the single-electron Hamiltonian of a spherical dome shell. Our analytical results clearly show the contribution of different elements of a spherical dome shell to the effective dielectric function.
More specifically, our results provide an accurate, quantitative correction for the dielectric function of metallic spherical dome shells with thickness below 10 nm. PMID:24921317 Kumarasinghe, Chathurangi; Premaratne, Malin; Agrawal, Govind P 2014-05-19 289 PubMed The hydrogen bond (HB) is central to our understanding of the properties of water. However, despite intense theoretical and experimental study, it continues to hold some surprises. Here, we show from an analysis of ab initio simulations that take proper account of nuclear quantum effects that the hydrogen-bonded protons in liquid water experience significant excursions in the direction of the acceptor oxygen atoms. This generates a small but nonnegligible fraction of transient autoprotolysis events that are not seen in simulations with classical nuclei. These events are associated with major rearrangements of the electronic density, as revealed by an analysis of the computed Wannier centers and ¹H chemical shifts. We also show that the quantum fluctuations exhibit significant correlations across neighboring HBs, consistent with an ephemeral shuttling of protons along water wires. We end by suggesting possible implications for our understanding of how perturbations (solvated ions, interfaces, and confinement) might affect the HB network in water. PMID:24014589 Ceriotti, Michele; Cuny, Jérôme; Parrinello, Michele; Manolopoulos, David E 2013-09-24 290 Most present applications in time-dependent density-functional theory employ adiabatic approximations for the exchange-correlation (XC) potential, ignoring all functional dependence on densities at previous times. In this talk, we describe the electron dynamics in quantum wells beyond the adiabatic approximation, using the time-dependent optimized effective potential (TDOEP) method. In TDOEP, the XC potential is a functional of the time-dependent orbitals, and follows from an integral equation over space and time.
We solve the full TDOEP integral equation for quantum well intersubband dynamics in exact exchange as well as self-interaction corrected ALDA. Various properties of the resulting time-dependent XC potential, such as its asymptotics, memory dependence, and discontinuity upon population of a new subband level are discussed. This work is supported by NSF DMR-0553485 and Research Corporation. Wijewardane, Harshani; Ullrich, Carsten A. 2007-03-01 291 PubMed Central The hydrogen bond (HB) is central to our understanding of the properties of water. However, despite intense theoretical and experimental study, it continues to hold some surprises. Here, we show from an analysis of ab initio simulations that take proper account of nuclear quantum effects that the hydrogen-bonded protons in liquid water experience significant excursions in the direction of the acceptor oxygen atoms. This generates a small but nonnegligible fraction of transient autoprotolysis events that are not seen in simulations with classical nuclei. These events are associated with major rearrangements of the electronic density, as revealed by an analysis of the computed Wannier centers and 1H chemical shifts. We also show that the quantum fluctuations exhibit significant correlations across neighboring HBs, consistent with an ephemeral shuttling of protons along water wires. We end by suggesting possible implications for our understanding of how perturbations (solvated ions, interfaces, and confinement) might affect the HB network in water. Ceriotti, Michele; Cuny, Jerome; Parrinello, Michele; Manolopoulos, David E. 2013-01-01 292 This research aims to design and control a full scale gun recoil buffering system which works under real firing impact loading conditions. A conventional gun recoil absorber is replaced with a controllable magnetorheological (MR) fluid damper. 
Through dynamic analysis of the gun recoil system, a theoretical model for optimal design and control of the MR fluid damper for impact loadings is derived. The optimal displacement, velocity and optimal design rules are obtained. By applying the optimal design theory to protect against impact loadings, an MR fluid damper for a full scale gun recoil system is designed and manufactured. An experimental study is carried out on a firing test rig which consists of a 30 mm caliber, multi-action automatic gun with an MR damper mounted to the fixed base through a sliding guide. Experimental buffering results under passive control and optimal control are obtained. By comparison, optimal control is better than passive control, because it produces smaller variation in the recoil force while achieving less displacement of the recoil body. The optimal control strategy presented in this paper is open-loop with no feedback system needed. This means that the control process is sensor-free. This is a great benefit for a buffering system under impact loading, especially for a gun recoil system which usually works in a harsh environment. Li, Z. C.; Wang, J. 2012-10-01 293 This is a Reply to the preceding Comment by Home and Whitaker [Phys. Rev. A 48, 2502 (1993)]. Our aim is to indicate how we apply basic quantum mechanics to quantum measurement theory. We discuss the work of Misra and Sudarshan [J. Math. Phys. 18, 576 (1977)] and defend our own work on the quantum Zeno effect. Fearn, H.; Lamb, W. E., Jr. 1993-09-01 294 The electronic band structure and dielectric properties of a GaAs quantum well have been investigated using the pseudopotential approach. The effect of quantum confinement on the electronic and dielectric properties of GaAs has been examined. It is found that significant variations in the studied properties occur at quantum well widths below 5 nm. 
The information may be useful in obtaining 2011-01-01 295 We calculate the thermoelectric response of a polycyclic molecular junction including electron-electron interactions. To do this, the molecular Green's function is determined via a Lanczos-based technique and π-electron effective field theory is used to model the degrees of freedom most relevant to transport. In these junctions we find that the presence of multiple rings leads to higher-order quantum interference features giving rise to dramatic enhancements of molecular thermoelectric effects, consistent with previous predictions based on Hückel theory, which neglected electron correlations. Barr, Joshua; Stafford, Charles 2012-02-01 296 The scintillation properties of liquid helium upon the recoil of a low-energy helium atom are discussed in the context of the possible use of this medium as a detector of dark matter. It is found that the prompt scintillation yield in the range of recoil energies from a few keV to 100 keV is somewhat higher than that obtained by a linear extrapolation from the measured yield for a 5-MeV α particle. A comparison is made of both the scintillation yield and the charge separation by an electric field for nuclear recoils and for electrons stopped in helium. Ito, T. M.; Seidel, G. M. 2013-08-01 297 A novel wavelength-dependent optical modulation technique capable of explicitly delineating the effects of quantum capture, carrier diffusion, and other intrinsic effects in quantum-well laser dynamics is described. Results for a compressively strained multiple-quantum-well laser are presented. D. Vassilovski; Ta-Chung Wu; S. Kan; K. Y. Lau; C. E. Zah 1995-01-01 298 We explore several realistic methods of tuning the interactions in two-dimensional electronic systems in high magnetic fields. We argue that these experimental probes can be useful in studying the interplay of topology, quantum geometry and symmetry breaking in the fractional quantum Hall effect (FQHE).
In particular, we show that the mixing of subbands and Landau levels in GaAs wide quantum wells breaks the particle-hole symmetry between the Moore-Read Pfaffian state and its particle-hole conjugate, the anti-Pfaffian, in such a way that the latter is unambiguously favored and generically describes the ground state at 5/2 filling [1]. Furthermore, the tilting of the magnetic field, or more generally variation of the band mass tensor, probes the fluctuation of the intrinsic metric degree of freedom of the incompressible fluids, and ultimately induces the crossover to the broken-symmetry and nematic phases in higher Landau levels [2]. Some of these mechanisms also lead to an enhancement of the excitation gap of the non-Abelian states, as observed in recent experiments. Finally, we compare the tuning capabilities in conventional systems with those in multilayer graphene and related materials with Dirac-type carriers, where tuning the band structure and dielectric environment provides a simple and direct method to engineer more robust FQHE states and to study quantum transitions between them [3]. [1] Z. Papic, F. D. M. Haldane, and E. H. Rezayi, arXiv:1209.6606 (2012). [2] Bo Yang, Z. Papic, E. H. Rezayi, R. N. Bhatt, and F. D. M. Haldane, Phys. Rev. B 85, 165318 (2012). [3] Z. Papic, R. Thomale, and D. A. Abanin, Phys. Rev. Lett. 107, 176602 (2011); Z. Papic, D. A. Abanin, Y. Barlas, and R. N. Bhatt, Phys. Rev. B 84, 241306(R) (2011); D. A. Abanin, Z. Papic, Y. Barlas, and R. N. Bhatt, New J. Phys. 14, 025009 (2012). Papic, Zlatko 2013-03-01 299 We investigate the way that the degenerate manifold of midgap edge states in quasicircular graphene quantum dots with zigzag boundaries supports, under magnetic-field-free conditions, strongly correlated many-body behavior analogous to the fractional quantum Hall effect (FQHE), familiar from the case of semiconductor heterostructures in high magnetic fields.
Systematic exact-diagonalization (EXD) numerical studies are presented for 5 ≤ N ≤ 8 fully spin-polarized electrons and for Igor Romanovsky; Constantine Yannouleas; Uzi Landman 2009-01-01 300 SciTech Connect The precise analog of the θ-quantization ambiguity of Yang-Mills theory exists for the real SU(2) connection formulation of general relativity. As in the former case, θ labels representations of large gauge transformations, which are superselection sectors in loop quantum gravity. We show that unless θ = 0, the (kinematical) geometric operators such as area and volume are not well defined on spin network states. More precisely, the intersection of their domain with the dense set Cyl in the kinematical Hilbert space H of loop quantum gravity is empty. The absence of a well-defined notion of area operator acting on spin network states seems at first in conflict with the expected finite black hole entropy. However, we show that the black hole (isolated) horizon area, which in contrast to kinematical area is a (Dirac) physical observable, is indeed well defined, and quantized so that the black hole entropy is proportional to the area. The effect of θ is negligible in the semiclassical limit where proportionality to area holds. Rezende, Danilo Jimenez; Perez, Alejandro [Centre de Physique Theorique, Campus de Luminy, 13288 Marseille (France)] 2008-10-15 301 InGaN/GaN light-emitting diodes (LEDs) grown along the polar orientations significantly suffer from the quantum confined Stark effect (QCSE) caused by the strong polarization-induced electric field in the quantum wells, which is a fundamental problem intrinsic to the III-nitrides. Here, we show that the QCSE is self-screened by the polarization-induced bulk charges enabled by designing quantum barriers.
The InN composition of the InGaN quantum barrier, graded along the growth orientation, generates the polarization-induced bulk charges in the quantum barrier, which compensate well for the polarization-induced interface charges, thus avoiding the electric field in the quantum wells. Consequently, the optical output power and the external quantum efficiency are substantially improved for the LEDs. The ability to self-screen the QCSE using polarization-induced bulk charges opens up new possibilities for device engineering of III-nitrides not only in LEDs but also in other optoelectronic devices. Zhang, Zi-Hui; Liu, Wei; Ju, Zhengang; Tiam Tan, Swee; Ji, Yun; Kyaw, Zabu; Zhang, Xueliang; Wang, Liancheng; Wei Sun, Xiao; Volkan Demir, Hilmi 2014-06-01 302 This work considers how the properties of hydrogen-bonded complexes, X-H⋯Y, are modified by the quantum motion of the shared proton. Using a simple two-diabatic-state model Hamiltonian, the analysis of the symmetric case, where the donor (X) and acceptor (Y) have the same proton affinity, is carried out. For quantitative comparisons, a parametrization specific to the O-H⋯O complexes is used. The vibrational energy levels of the one-dimensional ground-state adiabatic potential of the model are used to make quantitative comparisons with a vast body of condensed phase data, spanning a donor-acceptor separation (R) range of about 2.4-3.0 Å, i.e., from strong to weak hydrogen bonds. The position of the proton (which determines the X-H bond length) and its longitudinal vibrational frequency, along with the isotope effects in both, are described quantitatively. An analysis of the secondary geometric isotope effect, using a simple extension of the two-state model, yields an improved agreement of the predicted variation with R of frequency isotope effects. The role of bending modes is also considered: their quantum effects compete with those of the stretching mode for weak to moderate H-bond strengths.
In spite of the economy in the parametrization of the model used, it offers key insights into the defining features of H-bonds, and semi-quantitatively captures several trends. McKenzie, Ross H.; Bekker, Christiaan; Athokpam, Bijyalaxmi; Ramesh, Sai G. 2014-05-01 304 Quantum effects on the Rayleigh-Taylor instability of a stratified plasma layer in a porous medium are investigated. The linear growth rate is obtained analytically and is analyzed.
In the presence of quantum effects, both the porosity of the porous medium and the medium permeability have different influences on the coup point (k_coup) for stability, but they have no influence on the critical point (k_c) for stability. The quantum effect plays the principal role in the complete stability of the system considered. Hoshoudy, G. A. 2009-07-01 305 SciTech Connect It is shown that quantum effects lead to a significant decrease of the glass transition temperature Tg with respect to the melting temperature Tm, so that the ratio Tg/Tm can be much smaller than the typical value of 2/3 in materials where Tg is near or below 60 K. Furthermore, it is demonstrated that the viscosity or structural relaxation time in such low-temperature glass formers should exhibit highly unusual temperature dependence, namely a decrease of the apparent activation energy upon approaching Tg (instead of the traditional increase). Novikov, Vladimir [ORNL]; Sokolov, Alexei P [ORNL] 2013-01-01 306 SciTech Connect Electron density perturbation from carbon monoxide adsorption on a multi-hundred atom gold nanoparticle. The perturbation causes significant quantum size effects in CO catalysis on gold particles. Science: Jeff Greeley and Nick Romero, Argonne National Laboratory; Jesper Kleis, Karsten Jacobsen, Jens Nørskov, Technical University of Denmark. Visualization: Joseph Insley, Argonne National Laboratory. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Dept. of Energy under contract DE-AC02-06CH11357. None 2010-01-01 307 We investigate polarization-resolved fine structure in the photoluminescence (PL) in the fractional quantum Hall effect regime at B = 4-6 T, where small Zeeman energy allows spin-depolarized ground states. We observe up to five distinct peaks with characteristic polarization and temperature dependence in the vicinity of ν = 1/3 and quenching of the PL from triplet charged quasiexcitons at around ν = 1/4. Those findings appear to be consistent with results of exact diagonalization on a Haldane sphere including all spin configurations and are understood to be PL from fractionally charged quasiexcitons. Nomura, S.; Yamaguchi, M.; Tamura, H.; Akazaki, T.; Hirayama, Y.; Korkusinski, M.; Hawrylak, P. 2014-03-01 308 SciTech Connect We study the Kondo lattice and Hubbard models on a triangular lattice for band filling factor 3/4. We show that a simple non-coplanar chiral spin ordering (scalar spin chirality) is naturally realized in both models due to perfect nesting of the Fermi surface. The resulting triple-Q magnetic ordering is a natural counterpart of the collinear Néel ordering of the half-filled square-lattice Hubbard model. We show that the obtained chiral phase exhibits a spontaneous quantum Hall effect with σ_xy = e^2/h. Martin, Ivar [Los Alamos National Laboratory]; Batista, Cristian D [Los Alamos National Laboratory] 2008-01-01 309 We study the quantum Hall effect (QHE) in graphene based on the current injection model, which takes into account the finite rectangular geometry with source and drain electrodes. In our model, the presence of disorder, the edge-state picture, extended states, and localized states, which are believed to be indispensable ingredients in describing the QHE, do not play an important role. Instead the boundary conditions during the injection into the graphene sheet, which are enforced by the presence of the Ohmic contacts, determine the current-voltage characteristics. Kramer, Tobias; Kreisbeck, Christoph; Krueckl, Viktor; Heller, Eric J.; Parrott, Robert E.; Liang, Chi-Te 2010-02-01 310 ScienceCinema Electron density perturbation from carbon monoxide adsorption on a multi-hundred atom gold nanoparticle. The perturbation causes significant quantum size effects in CO catalysis on gold particles.
311 The theories of recoil-induced resonances (RIR) [J. Guo, P. R. Berman, B. Dubetsky, and G. Grynberg, Phys. Rev. A 46, 1426 (1992)] and the collective atomic recoil laser (CARL) [R. Bonifacio and L. De Salvo, Nucl. Instrum. Methods Phys. Res. A 341, 360 (1994)] are compared. Both theories can be used to derive expressions for the gain experienced by a probe field interacting with an ensemble of two-level atoms that are simultaneously driven by a pump field. It is shown that the underlying formalisms of the RIR and CARL are equivalent. Differences between the RIR and CARL arise because the theories are typically applied for different ranges of the parameters appearing in the theory. The RIR limit is one in which the time derivative of the probe field amplitude, dE2/dt, depends locally on E2(t) and the gain depends linearly on the atomic density, while the CARL limit is one in which dE2/dt = ∫_{t0}^{t} f(t,t') E2(t') dt', where f is a kernel, and the gain has a nonlinear dependence on the atomic density. Validity conditions for the RIR or CARL limits are established in terms of the various parameters characterizing the atom-field interaction. The probe gain for a probe-pump detuning equal to zero is analyzed in some detail, in order to understand how gain arises in a system which, at first glance, appears to have a symmetry that would preclude the possibility for gain. Moreover, it is shown that these calculations, carried out in perturbation theory, have a range of applicability beyond the recoil problem. Experimental possibilities for observing CARL are discussed.
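The two limiting regimes contrasted in the RIR/CARL comparison above can be written compactly. A schematic restatement in the abstract's own notation, with E2 the probe amplitude, n the atomic density, and f a memory kernel; the local gain coefficient g is introduced here only for illustration:

```latex
% RIR limit: local-in-time evolution, gain linear in the atomic density n
\frac{dE_2}{dt} = g(t)\,E_2(t), \qquad g \propto n
% CARL limit: evolution nonlocal in time, gain nonlinear in n
\frac{dE_2}{dt} = \int_{t_0}^{t} f(t,t')\,E_2(t')\,dt'
```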
Berman, P. R. 1999-01-01 312 The N2P research program funded by the INFN committee for Experimental Nuclear Physics (CSNIII) has among its goals the construction of a Proton Recoil Telescope (PRT), a detector to measure neutron energy spectra. The interest in such a detector is primarily related to the SPES project for rare-beam production at the Laboratori Nazionali di Legnaro. For the SPES project it is, in fact, of fundamental importance to have reliable information about energy spectra and yields for neutrons produced by d or p projectiles on thick light targets, in order to model the ''conversion target'' in which the p or d are converted into neutrons. These neutrons, in a second stage, will induce uranium fission in the ''production target''. The fission products are subsequently extracted, selected and re-accelerated to produce the exotic beam. The neutron spectra and angular distribution are important parameters to define the final production of fission fragments. In addition, this detector can be used to measure neutron spectra in the field of cancer therapy (this topic is nowadays of particular interest to INFN, for the National Centre for Hadron Therapy (CNAO) in Pavia) and space applications. Cinausero, M.; Barbui, M.; Prete, G.; Rizzi, V.; Andrighetto, A.; Pesente, S.; Fabris, D.; Lunardon, M.; Nebbia, G.; Viesti, G.; Moretto, S.; Morando, M.; Zenoni, A.; Bocci, F.; Donzella, A.; Bonomi, G.; Fontana, A. 2006-05-01 313 Optical absorption is investigated by a self-consistent density-matrix approach in asymmetric double quantum wells driven by an intense terahertz field and a direct-current electric field polarized along the growth direction. Rich nonlinear dynamic behaviors of the sideband absorption peaks are systematically studied in undoped asymmetric double quantum wells. When only a resonant terahertz field is present, the Autler-Townes splitting of the sideband peaks becomes pronounced with increasing strength of the terahertz field.
The quantum confined Stark effect of the sideband peaks is discussed when an invariant terahertz field and a direct-current electric field are simultaneously applied to the quantum well. It is shown that the sideband peaks of the 1s main absorption peak undergo a red-shift and the sideband peaks of the 2s main absorption peak undergo a blue-shift with increasing intensity of the direct-current electric field. The presented results have potential applications in electro-optical devices. Hong-wei, Wu; Xian-wu, Mi; Yong-gang, Huang; Ke-hui, Song 2013-01-01 314 Quantum gravity phenomenology opens up the possibility of probing Planck-scale physics. Thus, by exploiting the generic properties that a semiclassical state of the compound system of fermions plus gravity should have, an effective dynamics of spin-1/2 particles is obtained within the framework of loop quantum gravity. Namely, at length scales much larger than the Planck length l_P ~ 10^-33 cm and below the wavelength of the fermion, the spin-1/2 dynamics in flat spacetime includes Planck-scale corrections. In particular we obtain modified dispersion relations in vacuo for fermions. These corrections yield a time-of-arrival delay of the spin-1/2 particles with respect to a light signal and, in the case of neutrinos, a novel flavor oscillation. To detect these effects the corresponding particles must be highly energetic and should travel long distances. Hence neutrino bursts accompanying gamma-ray bursts or ultrahigh-energy cosmic rays could be considered. Remarkably, future neutrino telescopes may be capable of testing such effects. This paper provides a detailed account of the calculations and elaborates on results previously reported in a Letter. These are further amended by introducing a real parameter aimed at encoding our lack of knowledge of the scaling properties of the gravitational degrees of freedom. Alfaro, Jorge; Morales-Técotl, Hugo A.; Urrutia, Luis F.
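The modified dispersion relations mentioned in the loop-quantum-gravity entry above are commonly written as a Planck-length expansion. A generic, schematic form (the dimensionless coefficient α1 and the power of p are model dependent, not taken from the entry):

```latex
% Planck-scale-corrected dispersion relation (schematic leading correction)
E^2 \simeq p^2 c^2 + m^2 c^4 + \alpha_1\,\ell_P\,p^3 c^3 + \mathcal{O}(\ell_P^2)
% implying an energy-dependent time-of-arrival delay over a propagation distance L
\Delta t \sim \alpha_1\,\frac{L}{c}\,\frac{E}{E_P}
```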
2002-12-01 315 We experimentally demonstrate a dynamic form of the quantum Zeno effect in nuclear magnetic resonance systems. The frequent measurements are implemented through quantum entanglement between the target qubit(s) and the measuring qubit, which dynamically results from the unitary evolution of duration τ_m due to dispersive coupling. Experimental results testify to the presence of “the critical measurement time effect,” that is, the quantum Zeno effect does not occur when τ_m takes some critical values, even if the measurements are frequent enough. Moreover, we provide an experimental demonstration of an entanglement preservation mechanism based on such a dynamic quantum Zeno effect. Zheng, Wenqiang; Xu, D. Z.; Peng, Xinhua; Zhou, Xianyi; Du, Jiangfeng; Sun, C. P. 2013-03-01 316 We look at the relationship between the preparation method of Si and Ge nanostructures (NSs) and the structural, electronic, and optical properties in terms of quantum confinement (QC). QC in NSs causes a blue shift of the gap energy with decreasing NS dimension. Directly measuring the effect of QC is complicated by additional parameters, such as stress, interface and defect states. In addition, differences in NS preparation lead to differences in the relevant parameter set. A relatively simple model of QC, using a 'particle-in-a-box'-type perturbation to the effective-mass theory, was applied to Si and Ge quantum wells, wires and dots across a variety of preparation methods. The choice of the model was made in order to distinguish contributions that are solely due to the effects of QC, where the only varied experimental parameter was the crystallinity. It was found that the hole becomes delocalized in the case of amorphous materials, which leads to stronger confinement effects. The origin of this result was partly attributed to differences in the effective mass between the amorphous and crystalline NS as well as between the electron and hole.
Corrections to our QC model take into account a position-dependent effective mass. This term includes an inverse length scale dependent on the displacement from the origin. Thus, when the de Broglie wavelength or the Bohr radius of the carriers is on the order of the dimension of the NS, the carriers 'feel' the confinement potential altering their effective mass. Furthermore, it was found that certain interface states (Si-O-Si) act to pin the hole state, thus reducing the oscillator strength. Barbagiovanni, Eric G.; Lockwood, David J.; Costa Filho, Raimundo N.; Goncharova, Lyudmila V.; Simpson, Peter J. 2013-10-01 317 SciTech Connect The obliquely propagating two-dimensional quantum dust-ion-acoustic solitary waves in a magnetized quantum dusty plasma are studied by using the quantum hydrodynamic model. A linear dispersion relation is obtained using Fourier analysis, and a nonlinear quantum Zakharov-Kuznetsov equation is derived for small-amplitude perturbations. A stationary solution of this equation is obtained to investigate the effects of quantum corrections, concentration of dust particles, and the angle of propagation on the amplitude, width, and energy of the soliton. The relevance of the present investigation to astrophysical dusty plasmas is discussed. Khan, S. A.; Mushtaq, A.; Masood, W. [Department of Physics, COMSATS Institute of Information Technology, Islamabad (Pakistan) and Department of Physics, Government College, Bagh AJK (Pakistan); Theoretical Plasma Physics Division, PINSTECH, P. O.
Nilore, Islamabad (Pakistan) 2008-01-15 318 The aim of this lecture is to give a short survey of the Josephson effect, confining the attention to the main recent achievements and some stimulating perspectives. To render the lecture self-contained, a brief introduction will be given to recall the main aspects of the subject. Although it is always quite difficult to make any claim of novelty for a field which has been in the limelight of the scientific community for almost 40 years, the Josephson effect remains a very fashionable subject for both the underlying physics and device applications. Attention will also be paid to its unique role in the investigation of physical phenomena whose interest goes beyond the specific issue of superconductivity. Examples of the Josephson effect as a powerful tool for the investigation of quantum mechanics at a macroscopic level will be discussed. Barone, Antonio 2000-09-01 319 NASA Technical Reports Server (NTRS) We investigate the density-gradient (DG) transport model for efficient multi-dimensional simulation of quantum confinement effects in advanced MOS devices. The formulation of the DG model is described as a quantum correction to the classical drift-diffusion model. Quantum confinement effects are shown to be significant in sub-100 nm MOSFETs. In thin-oxide MOS capacitors, quantum effects may reduce gate capacitance by 25% or more. As a result, the inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements for oxide thicknesses down to 2 nm. Significant quantum corrections also occur in the I-V characteristics of short-channel (30 to 100 nm) n-MOSFETs, with current drive reduced by up to 70%. This effect is shown to result from reduced inversion charge due to quantum confinement of electrons in the channel.
Also, subthreshold slope is degraded by 15 to 20 mV/decade with the inclusion of quantum effects via the density-gradient model, and short-channel effects (in particular, drain-induced barrier lowering) are noticeably increased. Biegel, Bryan A.; Ancona, Mario G.; Rafferty, Conor S.; Yu, Zhiping 2000-01-01 320 The techniques of recoil-gating and recoil-decay tagging have been employed at Jyväskylä to perform in-beam γ-ray and electron spectroscopy studies of heavy nuclei. The JUROSPHERE γ-ray array and the SACRED electron spectrometer have been placed at the target position of the JYFL gas-filled recoil separator, the recoil ion transport unit (RITU). The RITU separator has been used to collect the recoils J. Uusitalo; P. Jones; P. Greenlees; P. Rahkila; M. Leino; A. N. Andreyev; P. A. Butler; T. Enqvist; K. Eskola; T. Grahn; R.-D. Herzberg; F. Hessberger; R. Julin; S. Juutinen; A. Keenan; H. Kettunen; P. Kuusiniemi; A. P. Leppänen; P. Nieminen; R. Page; J. Pakarinen; C. Scholey 2003-01-01 321 Lennard-Jones condensates in cylindrical pores are studied by path-integral Monte Carlo simulations with particular emphasis on phase transitions and quantum effects. The pore-diameter effect and the influence of the interaction strength between the cylinder wall and the adsorbate particles on the structures and the location of the phase boundaries is studied, and the quantum effect on the phase J. Hoffmann; P. Nielaba 2003-01-01 322 We present an efficient quantum algorithm to measure the average fidelity decay of a quantum map under perturbation using a single bit of quantum information. Our algorithm scales only as the complexity of the map under investigation, so for those maps admitting an efficient gate decomposition, it provides an exponential speed-up over known classical procedures.
Fidelity decay is important David Poulin; Robin Blume-Kohout; Raymond Laflamme; Harold Ollivier 2003-01-01 323 The 40Ar/39Ar dating technique requires the activation of 39Ar via neutron irradiation. The energy produced by the reaction is transferred to the daughter atom as kinetic energy and triggers its displacement, known as the recoil effect. Significant amounts of 39Ar and 37Ar can be lost from minerals, leading to spurious ages and biased age spectra. Through two experiments, we present Fred Jourdan; Jennifer P. Matzel; Paul R. Renne 2007-01-01 324 SciTech Connect Quasiparticle dissipation in a granular superconductor is modeled by an effective nearest-neighbor capacitance ΔC between the grains of a superconducting array. Using an expansion in 1/z, where z is the number of nearest neighbors in the array, I study the effects of quasiparticle dissipation on the transition temperature and short-range order of a granular superconductor. In agreement with experimental results, quasiparticle dissipation suppresses the quantum fluctuations in a superconducting array. If the self-capacitance of a grain is C0, then both the long-range and the short-range order of the array are enhanced as the ratio λ = C0/(zΔC) decreases. In disagreement with other work, the transition temperature is not reentrant for any value of λ. The results of this formalism, which consistently treats quantum fluctuations to first order in 1/z, should be valid in three-dimensional materials. Fishman, R.S. (Department of Physics, North Dakota State University, Fargo, ND (USA)) 1990-08-01 325 SciTech Connect This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This project studied quantum resonance effects on chemical reactions.
The authors' accurate reactive scattering calculations showed that quantum resonance phenomena dominate most chemical reactions and are essential to any real understanding of reactivity. It was found that, as long-lived metastable states of the colliding system, resonances can decay to reactants, products, or a mixture of both. Only the latter contribute to reaction. Conditions under which resonances can be neglected or treated statistically were studied. Important implications about the mechanism of recombination reactions were discovered, and some remarkable effects of geometric phases on the symmetries and energies of resonances were also discovered. Calculations were completed for the reaction H + O2 → OH + O, which is the rate-limiting step in the combustion of all hydrocarbons and the single most important reaction in all of combustion chemistry. Pack, R.; Kendrick, B.; Kress, J.; Walker, R. [Los Alamos National Lab., NM (United States)]; Hayes, E. [Ohio State Univ., Columbus, OH (United States)]; Lagana, A. [Univ. of Perugia (Italy)]; Parker, G. [Univ. of Oklahoma, Norman, OK (United States)]; Butcher, E. [Auburn Univ., AL (United States)] 1996-04-01 326 The effect of gravitational tidal forces on renormalized quantum fields propagating in curved spacetime is investigated and a generalisation of the optical theorem to curved spacetime is proved. In the case of QED, the interaction of tidal forces with the vacuum polarization cloud of virtual e+e- pairs dressing the renormalized photon has been shown to produce several novel phenomena. In particular, the photon field amplitude can locally increase as well as decrease, corresponding to a negative imaginary part of the refractive index, in apparent violation of unitarity and the optical theorem. Below-threshold decays into e+e- pairs may also occur.
In this paper, these issues are studied from the point of view of a non-equilibrium initial-value problem, with the field evolution from an initial null surface being calculated for physically distinct initial conditions and for both scalar field theories and QED. It is shown how a generalised version of the optical theorem, valid in curved spacetime, allows a local increase in amplitude while maintaining consistency with unitarity. The picture emerges of the field being dressed and undressed as it propagates through curved spacetime, with the local gravitational tidal forces determining the degree of dressing and hence the amplitude of the renormalized quantum field. These effects are illustrated with many examples, including a description of the undressing of a photon in the vicinity of a black hole singularity. Hollowood, Timothy J.; Shore, Graham M. 2012-02-01 327 The possibility of growing complex-shaped nanodot structures of various material composition allows optimization of certain physical parameters. In the present work, we present effective analytical methods for computing conduction-band eigenstates in quantum-dot structures of complex shape. Comparison with detailed finite-element computations is made. The electronic band-structure model used is a one-band k·p model assuming infinite barriers. Results based on two semi-analytical models are presented. The first model employs geometrical perturbation theory to obtain the quantitative effect of quantum-dot surface perturbations on electron energy levels. Furthermore, the method output includes the level of degeneracy, allowing variations with geometry to be assessed. The second model allows both energy levels and eigenstates to be easily determined for three-dimensional axisymmetric GaAs structures of varying radius embedded in an AlGaAs matrix by extending a method originally due to Stevenson on electromagnetic waveguide structures (Stevenson in J. Appl. Phys.
22:1447, 1951) to account for electron states. The latter model simplifies the description of a three-dimensional partial-differential-equation problem into a small set of ordinary differential equations. For structures with a large aspect ratio, the small set reduces to a single ordinary differential equation while maintaining high accuracy. A case study is presented to exemplify the models shown. Lassen, B.; Willatzen, M. 2009-08-01 328 The quantum Hall effect in graphene p-n junctions is studied numerically with emphasis on the effect of disorder at the interface of two adjacent regions. Conductance plateau values are found to depend on the intensity of the disorder and are accompanied by universal conductance fluctuations in the bipolar regime, in good agreement with theoretical predictions of random matrix theory for quantum chaotic cavities. The calculated Fano factors can be used in an experimental identification of the underlying transport character. Li, Jian; Shen, Shun-Qing 2008-11-01 329 SciTech Connect The effective interactions which provide a wavevector- and frequency-dependent restoring force for collective modes in quantum liquids are derived for the helium liquids by means of physical arguments and sum-rule and continuity considerations. A simple model is used to take into account mode-mode coupling between collective and multiparticle excitations, and the results for the zero-temperature liquid ⁴He phonon-maxon-roton spectrum are shown to compare favorably with experiment and with microscopic calculation. The role played by spin-dependent backflow in liquid ³He is analyzed, and a physical interpretation of its variation with density and spin-polarization is presented. A progress report is given on recent work on effective interactions and elementary excitations in nuclear matter, with particular attention to features encountered in the latter system which have no counterparts in the helium liquids. Pines, D.
1986-01-01 330 PubMed Central The effect of electrostatic shielding of the polarization fields in nanostructures at high carrier densities is studied. A simplified analytical model, employing screened, exponentially decaying polarization potentials localized at the edges of a QW, is introduced for the electrostatically shielded quantum confined Stark effect (QCSE). Wave function trapping within the Debye-length edge potential causes blue shifting of energy levels and gradual elimination of the QCSE red shifting with increasing carrier density. The increase in the e–h wave function overlap and the decrease of the radiative emission time are, however, delayed until the “edge-localization” energy exceeds the peak voltage of the charged layer. Then the wave function center shifts to the middle of the QW, and the behavior becomes similar to that of an unbiased square QW. Our theoretical estimates of the radiative emission time show a complete elimination of the QCSE at doping densities ≈10²⁰ cm⁻³, in quantitative agreement with experimental measurements. 2009-01-01 331 We present a systematic first-principles density functional theory (DFT) based study of the (020) surface of α-plutonium using the projector-augmented-wave formalism as implemented in the Vienna Ab Initio Simulation Package (VASP). The surface was modeled by a periodic slab geometry comprising anti-ferromagnetic atomic layers, with a thickness of up to ten atomic layers. The total and cohesive energies approach the bulk values with monotonically decreasing and increasing slopes, respectively. The surface energies, in contrast to the work functions, exhibit a significant oscillatory pattern indicating persistent quantum size effects and possibly magnetic frustration as well as other effects. The 5f electron density of states indicates progressive delocalization with increasing slab thickness. Hernandez, S. C.; Ray, A. K.; Taylor, C. D.
2013-10-01 332 Spin control has recently attracted attention for applications in spin-based devices. Different effects and applied fields have been suggested to accomplish this goal. We explore the time evolution of electronic spin in coupled quantum dots under harmonic electric fields. Using the Floquet formalism, we obtain the time-dependent wave function in terms of the Floquet states and the quasi-energy spectrum for a single electron in double InSb dots. The spatial part of the wave function includes the SIA (structure inversion asymmetry) and BIA (bulk inversion asymmetry) spin-orbit effects. The spectral force is analyzed at anti-crossings of the quasi-energy bands as a function of the field strength. The resulting dynamical symmetries and the way they reflect in the time evolution of the spin clouds will be discussed. Meza-Montes, Lilia; Hernandez, Arezky H.; Ulloa, Sergio E. 2007-03-01 333 SciTech Connect The ability to interferometrically detect inertial rotations via the Sagnac effect has been a strong stimulus for the development of atom interferometry because of the potential 10¹⁰ enhancement of the rotational phase shift in comparison to optical Sagnac gyroscopes. Here we analyze ballistic transport of matter waves in a one-dimensional chain of N coherently coupled quantum rings in the presence of a rotation of angular frequency ω. We show that the transmission probability, T, exhibits zero-transmission stop gaps as a function of the rotation rate, interspersed with regions of rapidly oscillating finite transmission. With increasing N, the transition from zero transmission to the oscillatory regime becomes an increasingly sharp function of ω, with a slope ∂T/∂ω ≈ N².
The steepness of this slope dramatically enhances the response to rotations in comparison to conventional single-ring interferometers such as the Mach-Zehnder interferometer and leads to a phase sensitivity well below the quantum shot-noise limit typical of atom interferometers. Search, Christopher P.; Toland, John R. E.; Zivkovic, Marko [Department of Physics and Engineering Physics, Stevens Institute of Technology, Hoboken, New Jersey 07030 (United States) 2009-05-15 334 Quantum fluctuations in the background geometry of a black hole are shown to affect the propagation of matter states falling into the black hole in a foliation that corresponds to observations purely outside the horizon. A state that starts as a Minkowski vacuum at past null infinity gets entangled with the gravity sector, so that close to the horizon it can be represented by a statistical ensemble of orthogonal states. We construct an operator connecting the different states and comment on the possible physical meaning of the above construction. The induced energy-momentum tensor of these states is computed in the neighbourhood of the horizon, and it is found that energy-momentum fluctuations become large in the region where the bulk of the Hawking radiation is produced. The background spacetime as seen by an outside observer may be drastically altered in this region, and an outside observer should see significant interactions between the infalling matter and the outgoing Hawking radiation. The boundary of the region of strong quantum gravitational effects is given by a time-like hypersurface of constant Schwarzschild radius r one Planck unit away from the horizon. This boundary hypersurface is an example of a stretched horizon. 1995-02-01 335 PubMed We study quantum effects in a spin-3/2 antiferromagnet on the pyrochlore lattice in an external magnetic field, focusing on the vicinity of a plateau in the magnetization at half the saturation value, observed in CdCr2O4 and HgCr2O4.
Our theory, based on quantum fluctuations, predicts the existence of a symmetry-broken state on the plateau, even with only nearest-neighbor microscopic exchange. This symmetry-broken state consists of a particular arrangement of spins polarized parallel and antiparallel to the field in a 3:1 ratio on each tetrahedron. It quadruples the lattice unit cell and reduces the space group from Fd-3m to P4₃32. We also predict that for fields just above the plateau, the low-temperature phase has transverse spin order, describable as a Bose-Einstein condensate of magnons. Other comparisons to and suggestions for experiments are discussed. PMID:16606312 Bergman, Doron L; Shindou, Ryuichi; Fiete, Gregory A; Balents, Leon 2006-03-10 336 We explore the charge transport mechanism in organic semiconductors based on a model that accounts for the thermal intermolecular disorder at work in pure crystalline compounds, as well as extrinsic sources of disorder that are present in current experimental devices. Starting from the Kubo formula, we describe a theoretical framework that relates the time-dependent quantum dynamics of electrons to the frequency-dependent conductivity. The electron mobility is then calculated through a relaxation time approximation that accounts for quantum localization corrections beyond Boltzmann theory, and allows us to efficiently address the interplay between highly conducting states in the band range and localized states induced by disorder in the band tails. The emergence of a “transient localization” phenomenon is shown to be a general feature of organic semiconductors that is compatible with the bandlike temperature dependence of the mobility observed in pure compounds. Carrier trapping by extrinsic disorder causes a crossover to a thermally activated behavior at low temperature, which is progressively suppressed upon increasing the carrier concentration, as is commonly observed in organic field-effect transistors.
Our results establish a direct connection between the localization of the electronic states and their conductive properties, formalizing phenomenological considerations that are commonly used in the literature. Ciuchi, S.; Fratini, S. 2012-12-01 337 The quantum Hall effect (QHE) of a two-dimensional (2D) electron gas in a strong magnetic field is one of the most fascinating quantum phenomena discovered in condensed matter physics. In this work we propose to study the transport properties of single-layer and bilayer graphene at the charge neutrality point (CNP) and compare them with the random magnetic field model developed in theoretical papers. We argue that at the CNP the graphene layer is still inhomogeneous, very likely due to the random potential of impurities. The random potential fluctuations induce smooth fluctuations in the local filling factor around ν = 0. In this case the transport is determined by a special class of trajectories, the 'snake states', propagating along the contour ν = 0. The situation is very similar to the transport of two-dimensional particles moving in a spatially modulated random magnetic field with zero mean value. We especially emphasize that our results may be equally relevant to the composite-fermion description of the half-filled Landau level. Leon, Jorge A.; Gusev, Guennadii M.; Plentz, Flavio O. 2013-03-01 338 PubMed Central An understanding of hydrogen diffusion on metal surfaces is important not only for its role in heterogeneous catalysis and hydrogen fuel cell technology but also because it provides model systems where tunneling can be studied under well-defined conditions. Here we report helium spin–echo measurements of the atomic-scale motion of hydrogen on the Ru(0001) surface between 75 and 250 K. Quantum effects are evident at temperatures as high as 200 K, while below 120 K we observe a tunneling-dominated temperature-independent jump rate of 1.9 × 10⁹ s⁻¹, many orders of magnitude faster than previously seen.
Quantum transition-state theory calculations based on ab initio path-integral simulations reproduce the temperature dependence of the rate at higher temperatures and predict a crossover to tunneling-dominated diffusion at low temperatures. However, the tunneling rate is underestimated, highlighting the need for future experimental and theoretical studies of hydrogen diffusion on this and other well-defined surfaces. 2013-01-01 339 We study several dynamical properties of a recently proposed implementation of the quantum transverse-field Ising chain in the framework of circuit quantum electrodynamics (QED). Particular emphasis is placed on the effects of disorder on the nonequilibrium behavior of the system. We show that small amounts of fabrication-induced disorder in the system parameters do not jeopardize the observation of previously predicted phenomena. Based on a numerical extraction of the mean free path of a wave packet in the system, we also provide a simple quantitative estimate for certain disorder effects on the nonequilibrium dynamics of the circuit QED quantum simulator. We discuss the transition from weak to strong disorder, characterized by the onset of Anderson localization of the system's wave functions, and the qualitatively different dynamics it leads to. Viehmann, Oliver; von Delft, Jan; Marquardt, Florian 2013-03-01 340 PubMed We find theoretically a new quantum state of matter, the valley-polarized quantum anomalous Hall state, in silicene. In the presence of Rashba spin-orbit coupling and an exchange field, silicene hosts a quantum anomalous Hall state with Chern number C=2. We show that through tuning the Rashba spin-orbit coupling, a topological phase transition results in a valley-polarized quantum anomalous Hall state, i.e., a quantum state that exhibits the electronic properties of both the quantum valley Hall state (valley Chern number Cv=3) and the quantum anomalous Hall state with C=-1.
This finding provides a platform for designing dissipationless valleytronics in a more robust manner. PMID:24679320 Pan, Hui; Li, Zhenshan; Liu, Cheng-Cheng; Zhu, Guobao; Qiao, Zhenhua; Yao, Yugui 2014-03-14 341 The path integral Monte Carlo (PIMC) method is used to simulate liquid neon at T=40 K. It is shown that quantum effects are not negligible and that when the quantum effective pair potential is used in a classical molecular dynamics simulation the results obtained for the radial distribution function agree with those predicted by a full path integral Monte Carlo D. Thirumalai; Randall W. Hall; B. J. Berne 1984-01-01 342 There are two known distinct types of the integer quantum Hall effect. One is the conventional quantum Hall effect, characteristic of two-dimensional semiconductor systems, and the other is its relativistic counterpart observed in graphene, where charge carriers mimic Dirac fermions characterized by Berry's phase π, which results in shifted positions of the Hall plateaus. Here we report a third type K. S. Novoselov; E. McCann; S. V. Morozov; V. I. Fal'Ko; M. I. Katsnelson; U. Zeitler; D. Jiang; F. Schedin; A. K. Geim 2006-01-01 343 We develop a simple kinetic equation description of edge-state dynamics in the fractional quantum Hall effect (FQHE), which allows us to examine in detail equilibration processes between multiple edge modes. As in the integer quantum Hall effect, intermode equilibration is a prerequisite for quantization of the Hall conductance. Two sources for such equilibration are considered: edge-impurity scattering and equilibration by C. L. Kane; Matthew P. A. Fisher 1995-01-01 344 The novel properties of quantum dots reveal the evolution of the bulk electronic structure with increasing size, which has been calculated using spin density functional theory (SDFT).
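The PIMC entry above (Thirumalai, Hall, and Berne, record 341) quantifies quantum effects in liquid neon by path integral Monte Carlo. As a hedged illustration of the underlying technique, and not the authors' code, the following minimal primitive PIMC estimates the quantum-broadened mean-square displacement ⟨x²⟩ of a single 1D harmonic oscillator (ħ = m = ω = 1); all parameter values are illustrative. A classical simulation at the same temperature would give the smaller value 1/β.

```python
import math
import random

def pimc_msd(beta=4.0, P=16, sweeps=4000, step=0.6, seed=1):
    """Primitive path-integral Monte Carlo estimate of <x^2> for a 1D
    harmonic oscillator (hbar = m = omega = 1) at inverse temperature beta."""
    random.seed(seed)
    tau = beta / P                 # imaginary-time slice
    x = [0.0] * P                  # ring-polymer bead positions

    def local_action(k, xk):
        # Kinetic springs to both neighbors plus the potential term tau*V(xk).
        xl, xr = x[(k - 1) % P], x[(k + 1) % P]
        spring = ((xk - xl) ** 2 + (xr - xk) ** 2) / (2.0 * tau)
        return spring + tau * 0.5 * xk ** 2

    total, count = 0.0, 0
    for sweep in range(sweeps):
        for k in range(P):         # single-bead Metropolis moves
            trial = x[k] + random.uniform(-step, step)
            dS = local_action(k, trial) - local_action(k, x[k])
            if dS < 0 or random.random() < math.exp(-dS):
                x[k] = trial
        if sweep > sweeps // 5:    # discard equilibration sweeps
            total += sum(xi * xi for xi in x) / P
            count += 1
    return total / count
```

At beta = 4 the exact quantum result is (1/2)coth(beta/2) ≈ 0.52, well above the classical 1/beta = 0.25, mirroring the non-negligible quantum effects the abstract reports for neon at 40 K.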
Both theoretical and experimental information on spherical quantum dots, that is, on physical and chemical properties such as the effective potential, the specific shell structure, and structural relationships, is sparse. The interaction effects of the Manickam Mahendran 2006-01-01 345 SciTech Connect The effects of the electron spin interaction on the pure instability and propagation modes of the quantum electrostatic waves are investigated in cold quantum electron plasmas. It is found that the influence of the electron spin interaction increases the group velocity of the propagation mode of the quantum electrostatic wave. In addition, it is shown that the electron spin interaction enhances the growth rate of the instability mode of the quantum electrostatic wave. It is also found that the effects of the electron spin interaction would be more important in the domain of small Fermi wave numbers. Ki, Dae-Han; Jung, Young-Dae [Department of Applied Physics, Hanyang University, Ansan, Kyunggi-Do 426-791 (Korea, Republic of) 2011-09-19 346 PubMed Fluctuations of local fields cause decoherence of quantum objects. Usually at high temperatures, thermal noises are much stronger than quantum fluctuations unless the thermal effects are suppressed by certain techniques such as spin echo. Here we report the discovery of strong quantum-fluctuation effects of nuclear spin baths on free-induction decay of single electron spins in solids at room temperature. We find that the competition between the quantum and thermal fluctuations is controllable by an external magnetic field. These findings are based on Ramsey interference measurement of single nitrogen-vacancy center spins in diamond and numerical simulation of the decoherence, which are in excellent agreement. PMID:22666535 Liu, Gang-Qin; Pan, Xin-Yu; Jiang, Zhan-Feng; Zhao, Nan; Liu, Ren-Bao 2012-01-01 347 PubMed Central Fluctuations of local fields cause decoherence of quantum objects.
Usually at high temperatures, thermal noises are much stronger than quantum fluctuations unless the thermal effects are suppressed by certain techniques such as spin echo. Here we report the discovery of strong quantum-fluctuation effects of nuclear spin baths on free-induction decay of single electron spins in solids at room temperature. We find that the competition between the quantum and thermal fluctuations is controllable by an external magnetic field. These findings are based on Ramsey interference measurement of single nitrogen-vacancy center spins in diamond and numerical simulation of the decoherence, which are in excellent agreement. Liu, Gang-Qin; Pan, Xin-Yu; Jiang, Zhan-Feng; Zhao, Nan; Liu, Ren-Bao 2012-01-01 348 SciTech Connect Based on quantum hydrodynamics theory, a model for quantum dust acoustic waves (QDAWs) is presented that includes the dust size distribution (DSD) effect. A quantum version of the Zakharov-Kuznetsov equation, adequate for describing QDAWs, is derived. Two different DSD functions are applied. The dependence of the wave velocity, amplitude, and width on the DSD is investigated numerically. The quantum effect changes only the soliton width. A brief conclusion summarizes the current findings, and their relevance to astrophysical data is also discussed. El-Labany, S. K.; El-Taibany, W. F.; Behery, E. E. [Department of Physics, Faculty of Science, Mansoura University, Damietta Branch, Damietta El-Gedida, P.O. 34517 (Egypt); El-Siragy, N. M. [Department of Physics, Faculty of Science, Tanta University, Tanta, P.O. 31527 (Egypt) 2009-09-15 349 SciTech Connect Based on the one-component plasma model, a new dispersion relation and group velocity of elliptically polarized extraordinary electromagnetic waves in a superdense quantum magnetoplasma are derived. The group velocity of the extraordinary wave is modified due to the quantum forces and magnetization effects within a certain range of wave numbers.
This means that the quantum spin-1/2 effects can reduce the transport of energy in such quantum plasma systems. Our work should be of relevance for dense astrophysical environments and condensed matter physics. Li Chunhua; Ren Haijun; Yang Weihong [Department of Modern Physics, University of Science and Technology of China, 230026 Hefei (China); Wu Zhengwei [Department of Modern Physics, University of Science and Technology of China, 230026 Hefei (China); Department of Physics and Materials Science, City University of Hong Kong, Tat Chee Avenue, Kowloon (Hong Kong); Chu, Paul K. [Department of Physics and Materials Science, City University of Hong Kong, Tat Chee Avenue, Kowloon (Hong Kong) 2012-12-15 350 NASA Technical Reports Server (NTRS) We investigate the density-gradient (DG) transport model for efficient multi-dimensional simulation of quantum confinement effects in advanced MOS devices. The formulation of the DG model is described as a quantum correction to the classical drift-diffusion model. Quantum confinement effects are shown to be significant in sub-100 nm MOSFETs. In thin-oxide MOS capacitors, quantum effects may reduce gate capacitance by 25% or more. As a result, the inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements for oxide thicknesses down to 2 nm. Significant quantum corrections also occur in the I-V characteristics of short-channel (30 to 100 nm) n-MOSFETs, with current drive reduced by up to 70%. This effect is shown to result from reduced inversion charge due to quantum confinement of electrons in the channel. Also, the subthreshold slope is degraded by 15 to 20 mV/decade with the inclusion of quantum effects via the density-gradient model, and short-channel effects (in particular, drain-induced barrier lowering) are noticeably increased.
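The NTRS entry above (record 350) reports that quantum confinement reduces thin-oxide gate capacitance. A much cruder back-of-the-envelope sketch than the density-gradient model, offered only as an assumed illustration: treat the displacement of the inversion-layer centroid away from the interface as a "dark-space" capacitance in series with the oxide capacitance. The 1 nm dark-space value below is a made-up illustrative number, not from the paper.

```python
# Illustrative series-capacitor sketch (assumption, not the DG model itself):
# quantum confinement pushes the inversion charge centroid a distance
# "dark_space" into the silicon, adding a series capacitance.
EPS0 = 8.854e-12            # vacuum permittivity, F/m
EPS_OX = 3.9 * EPS0         # SiO2 permittivity
EPS_SI = 11.7 * EPS0        # Si permittivity

def gate_capacitance(t_ox_nm, dark_space_nm=0.0):
    """Per-area gate capacitance (F/m^2) with an optional quantum
    dark-space capacitance in series with the oxide capacitance."""
    c_ox = EPS_OX / (t_ox_nm * 1e-9)
    if dark_space_nm == 0.0:
        return c_ox
    c_qm = EPS_SI / (dark_space_nm * 1e-9)
    return 1.0 / (1.0 / c_ox + 1.0 / c_qm)

# Thinner oxides suffer a larger *relative* capacitance loss:
for t_ox in (2.0, 5.0, 10.0):
    loss = 1.0 - gate_capacitance(t_ox, 1.0) / gate_capacitance(t_ox)
    print(f"t_ox = {t_ox:4.1f} nm -> capacitance reduced by {100 * loss:4.1f}%")
```

Even this toy model reproduces the qualitative trend in the abstract: the relative reduction grows as the oxide thins, which is why quantum corrections matter most for the 2 nm oxides mentioned there (the full DG correction is larger than this sketch gives).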
Biegel, Bryan A.; Rafferty, Conor S.; Ancona, Mario G.; Yu, Zhi-Ping 2000-01-01 351 PubMed Central Fibrin fibers form the structural scaffold of blood clots. Thus, their mechanical properties are of central importance to understanding hemostasis and thrombotic disease. Recent studies have revealed that fibrin fibers are elastomeric despite their high degree of molecular ordering. These results have inspired a variety of molecular models for fibrin's elasticity, ranging from reversible protein unfolding to rubber-like elasticity. An important property that has not been explored is the timescale of elastic recoil, a parameter that is critical for fibrin's mechanical function and places a temporal constraint on molecular models of fiber elasticity. Using high-frame-rate imaging and atomic force microscopy-based nanomanipulation, we measured the recoil dynamics of individual fibrin fibers and found that the recoil was orders of magnitude faster than anticipated from models involving protein refolding. We also performed steered discrete molecular-dynamics simulations to investigate the molecular origins of the observed recoil. Our results point to the unstructured αC regions of the otherwise structured fibrin molecule as being responsible for the elastic recoil of the fibers. Hudson, Nathan E.; Ding, Feng; Bucay, Igal; O'Brien, E. Timothy; Gorkun, Oleg V.; Superfine, Richard; Lord, Susan T.; Dokholyan, Nikolay V.; Falvo, Michael R. 2013-01-01 352 We have discovered a potential black hole recoil candidate offset from a nearby dwarf galaxy by 0.8 kpc. The object is a point source that shows broad Balmer lines and was originally classified as a supernova because of its non-detection in 2005. However, we detect it in recent observations, indicating it is still luminous, and it shows variability over 63 years in DSS, SDSS, and Pan-STARRS data obtained since 1950.
The object shows broad Balmer, Fe II, Ca II, and He I lines consistent with classical AGN optical spectra, but offset by 300 km/s from the galaxy redshift. The observed narrow line emission is consistent with originating from host galaxy contamination. Our adaptive optics observations constrain the source size to be smaller than 10 pc, suggesting that all of the emission is coming from an extremely small region. Overall these properties are consistent with theoretical predictions of a runaway black hole caused by general relativistic effects predicted in black hole mergers. Koss, Michael; Blecha, L.; Mushotzky, R.; Veilleux, S.; Hung, C.; Man, A.; Li, Y. 2014-01-01 353 Bulk bismuth is an efficient thermoelectric material. Assuming intrinsic conditions, the theory of quantum confinement of bismuth nanowires by Hicks and Dresselhaus predicts a semimetal-to-semiconductor transformation for critical diameters of around 50 nm. For nanowires of diameters below the critical diameter, electronic states can be considered to be one dimensional and therefore the thermopower can be very large. However, angle-resolved photoemission spectroscopy (ARPES) studies of Bi planar surfaces present direct evidence of heavy mass surface states that can inhibit the semimetal-to-semiconductor transformation. We present a study of the Fermi surface of Bi nanowires of diameters ranging between 200 and 30 nm employing the Shubnikov-de Haas method. Our results can be understood in terms of the model of surface states. For 30 nm nanowires we find that the Fermi surface is spherical, that the carriers have high effective mass, and that the number of carriers corresponds to that inferred from ARPES measurements. Huber, T. E.; Nikolaeva, A.; Gitsu, D.; Konopko, L.; Graf, M. J. 2007-03-01 354 We consider the steady-state thermoelectric transport through a vibrating molecular quantum dot that is contacted to macroscopic leads. 
For moderate electron-phonon interaction strength and comparable electronic and phononic timescales, we investigate the impact of the formation of a local polaron on the thermoelectric properties of the junction. We apply a variational Lang-Firsov transformation and solve the equations of motion in the Kadanoff-Baym formalism up to second order in the dot-lead coupling parameter. We calculate the thermoelectric current and voltage for finite temperature differences in the resonant and inelastic tunneling regimes. For a near-resonant dot level, the formation of a local polaron can boost the thermoelectric effect because of the Franck-Condon blockade. The line shape of the thermoelectric voltage signal becomes asymmetrical due to the varying polaronic character of the dot state, and in the nonlinear transport regime, vibrational signatures arise. Koch, T.; Loos, J.; Fehske, H. 2014-04-01 355 The quantum anomalous Hall effect (QAHE) is a fundamental transport phenomenon in the field of condensed-matter physics. Without an external magnetic field, spontaneous magnetization combined with spin-orbit coupling gives rise to a quantized Hall conductivity. So far, a number of theoretical proposals have been made to realize the QAHE, but all based on inorganic materials. Here, using first-principles calculations, we predict a family of 2D organic topological insulators (OTIs) for realizing the QAHE. Designed by assembling molecular building blocks of triphenyl-transition-metal compounds into a hexagonal lattice, this new class of organic materials is shown to have a nonzero Chern number and to exhibit a gapless chiral edge state within the Dirac gap. Wang, Zhengfei; Liu, Zheng; Liu, Feng 2013-03-01 356 The quantum anomalous Hall effect (QAHE) is a fundamental transport phenomenon in the field of condensed-matter physics. Without an external magnetic field, spontaneous magnetization combined with spin-orbit coupling gives rise to a quantized Hall conductivity.
So far, a number of theoretical proposals have been made to realize the QAHE, but all based on inorganic materials. Here, using first-principles calculations, we predict a family of 2D organic topological insulators for realizing the QAHE. Designed by assembling molecular building blocks of triphenyl-transition-metal compounds into a hexagonal lattice, this new class of organic materials is shown to have a nonzero Chern number and exhibits a gapless chiral edge state within the Dirac gap. Wang, Z. F.; Liu, Zheng; Liu, Feng 2013-05-01 357 Molecules like water have vibrational modes with zero-point energy well above room temperature. As a consequence, classical molecular dynamics simulations of liquid water largely underestimate the kinetic energy of the ions, which translates into an underestimation of covalent interatomic distances. In this work, we show that it is possible to apply a generalized Langevin equation with suppressed noise in combination with Nose-Hoover thermostats to achieve an efficient zero-point temperature of independent modes of liquid water. Using this method we deconstruct the competing quantum effects in liquid water. We demonstrate how the structure and dynamical modes of liquid water respond to a non-equilibrium distribution of zero-point temperatures on the normal modes. Ramirez, Rafa; Ganeshan, Sriram; Fernandez-Serra, M. V. 2013-03-01 358 PubMed A simple device of three laterally coupled quantum dots, the central one contacted by metal leads, provides a realization of the ferromagnetic Kondo model, which is characterized by interesting properties like a nonanalytic inverted zero-bias anomaly and an extreme sensitivity to a magnetic field. Tuning the gate voltages of the lateral dots allows us to study the transition from a ferromagnetic to an antiferromagnetic Kondo effect, a simple case of a Berezinskii-Kosterlitz-Thouless transition. We model the device by three coupled Anderson impurities that we study by numerical renormalization group.
We calculate the single-particle spectral function of the central dot, which at zero frequency is proportional to the zero-bias conductance, across the transition, both in the absence and in the presence of a magnetic field. PMID:23931401 Baruselli, P P; Requist, R; Fabrizio, M; Tosatti, E 2013-07-26 359 The energy spectrum of primary cosmic rays is explained by particles emitted during a thermal expansion of explosive objects inside and near the galaxy, remnants of which may be supernovae and/or active galaxies, or even stars or galaxies that disappeared from our sight after the explosion. A power-law energy spectrum for cosmic rays, E^(−α−1), is obtained from an expansion rate T ∝ R^α. Using the solution of the Einstein equation, we obtain a spectrum which agrees very well with experimental data. The implication of an inflationary early universe for the cosmic-ray spectrum is also discussed. It is also suggested that the conflict between this model and the singularity theorem in classical general relativity may be eliminated by quantum effects. Tomozawa, Y. 1985-08-01 360 PubMed Surface x-ray scattering and scanning-tunneling microscopy experiments reveal novel coarsening behavior of Pb nanocrystals grown on Si(111)-(7×7). It is found that quantum size effects lead to the breakdown of the classical Gibbs-Thomson analysis. This is manifested by the lack of scaling of the island densities. In addition, island decay times τ are orders of magnitude faster than expected from the classical analysis and have an unusual dependence on the growth flux F (i.e., τ ≈ 1/F). As a result, a highly monodispersed 7-layer island height distribution is found after coarsening if the islands are grown at high rather than low flux rates. These results have important implications, especially at low temperatures, for the controlled growth and self-organization of nanostructures.
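The coarsening entry above (record 360) refers to the classical Gibbs-Thomson analysis that the measured Pb islands violate. As a hedged sketch of that classical baseline only (the parameter values are assumptions for illustration, not data from the paper), the Gibbs-Thomson relation assigns a small island of radius R an excess chemical potential 2γΩ/R, so smaller islands are less stable and decay first during classical Ostwald ripening:

```python
def gibbs_thomson_excess(radius_nm, gamma=1.0, omega=0.02):
    """Classical Gibbs-Thomson excess chemical potential of an island of
    radius radius_nm, in arbitrary consistent units.  gamma is the surface
    energy and omega the atomic volume; both values here are illustrative."""
    return 2.0 * gamma * omega / radius_nm

# Classical expectation: the excess potential, and hence the decay driving
# force, grows as islands shrink -- small islands feed large ones.
for r in (1.0, 5.0, 20.0):
    print(f"R = {r:5.1f} nm -> excess mu = {gibbs_thomson_excess(r):.4f}")
```

The abstract's point is precisely that quantum size effects break this 1/R picture: the observed decay times scale with the growth flux (τ ≈ 1/F) rather than with island size alone.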
PMID:16605766 Jeffrey, C A; Conrad, E H; Feng, R; Hupalo, M; Kim, C; Ryan, P J; Miceli, P F; Tringides, M C 2006-03-17 361 PubMed We present a theoretical study of electron transport in Ni4 molecular transistors in the presence of Zeeman spin splitting and magnetic quantum coherence (MQC). The Zeeman interaction extends along the leads, producing gaps in the energy spectrum which allow electron transport with spin polarized along a certain direction. We show that the coherent states in resonance with the spin-up or spin-down states in the leads induce an effective coupling between localized spin states and continuum spin states in the single-molecule magnet and leads, respectively. We investigate the conductance at zero temperature as a function of the applied bias and magnetic field by means of the Landauer formula, and show that the MQC is responsible for the appearance of resonances. Accordingly, we name them MQC resonances. PMID:24918902 González, Gabriel; Leuenberger, Michael N 2014-07-01 362 We present a theoretical study of electron transport in Ni4 molecular transistors in the presence of Zeeman spin splitting and magnetic quantum coherence (MQC). The Zeeman interaction extends along the leads, producing gaps in the energy spectrum which allow electron transport with spin polarized along a certain direction. We show that the coherent states in resonance with the spin-up or spin-down states in the leads induce an effective coupling between localized spin states and continuum spin states in the single-molecule magnet and leads, respectively. We investigate the conductance at zero temperature as a function of the applied bias and magnetic field by means of the Landauer formula, and show that the MQC is responsible for the appearance of resonances. Accordingly, we name them MQC resonances. González, Gabriel; Leuenberger, Michael N.
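The Ni4 transistor entries above (records 361 and 362) compute the zero-temperature conductance via the Landauer formula. As a generic, hedged illustration of that formula (a textbook single-level Breit-Wigner model, not the authors' Ni4 calculation), the conductance is G = (2e²/h) T(E_F), with the transmission peaking at unity on resonance for symmetric lead couplings:

```python
# Landauer conductance of one resonant level coupled to two leads.
# Generic Breit-Wigner sketch; the level position and broadenings are
# illustrative parameters, not values from the Ni4 study.
G0 = 2 * (1.602176634e-19) ** 2 / 6.62607015e-34   # conductance quantum 2e^2/h, S

def transmission(E, eps0, gamma_L, gamma_R):
    """Breit-Wigner transmission through a level at eps0 with lead
    broadenings gamma_L and gamma_R (all energies in eV)."""
    return (gamma_L * gamma_R) / ((E - eps0) ** 2 + ((gamma_L + gamma_R) / 2) ** 2)

def zero_T_conductance(E_F, eps0, gamma_L, gamma_R):
    """Zero-temperature Landauer conductance G = (2e^2/h) * T(E_F)."""
    return G0 * transmission(E_F, eps0, gamma_L, gamma_R)
```

On resonance with equal couplings the transmission is exactly 1 and G reaches the conductance quantum; detuning the level (by bias or magnetic field, as in the abstracts) sweeps the system through such resonances.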
2014-07-01 363 PubMed The quantum anomalous Hall effect (QAHE) is a fundamental transport phenomenon in the field of condensed-matter physics. Without an external magnetic field, spontaneous magnetization combined with spin-orbit coupling gives rise to a quantized Hall conductivity. So far, a number of theoretical proposals have been made to realize the QAHE, but all based on inorganic materials. Here, using first-principles calculations, we predict a family of 2D organic topological insulators for realizing the QAHE. Designed by assembling molecular building blocks of triphenyl-transition-metal compounds into a hexagonal lattice, this new class of organic materials is shown to have a nonzero Chern number and exhibits a gapless chiral edge state within the Dirac gap. PMID:23705732 Wang, Z F; Liu, Zheng; Liu, Feng 2013-05-10 364 PubMed Measurements of basal-plane longitudinal ρ_b(B) and Hall ρ_H(B) resistivities were performed on highly oriented pyrolytic graphite samples in a pulsed magnetic field up to B = 50 T applied perpendicular to the graphene planes, at temperatures down to 1.5 K. For B > 30 T and for all studied samples, we observed a sign change in ρ_H(B) from electron- to hole-like. For our best quality sample, the measurements revealed an enhancement in ρ_b(B) for B > 34 T (T = 1.8 K), presumably associated with a field-driven charge-density-wave or Wigner-crystallization transition. In addition, well-defined plateaus in ρ_H(B) were detected in the ultraquantum limit, revealing possible signatures of the fractional quantum Hall effect in graphite. PMID:19792390 Kopelevich, Y; Raquet, B; Goiran, M; Escoffier, W; da Silva, R R; Pantoja, J C Medina; Luk'yanchuk, I A; Sinchenko, A; Monceau, P 2009-09-11 365 The Affleck-Kennedy-Lieb-Tasaki state (or Haldane phase) in a spin-1 chain represents a large class of gapped topological paramagnets that host symmetry-protected gapless excitations on the boundary.
In this work, we show how to realize this type of featureless spin-1 state on a generic two-dimensional lattice. These states have a gapped spectrum in the bulk, but they support gapless edge states protected by spin rotational symmetry along a certain direction, and they exhibit the spin quantum Hall effect. Using a fermion representation of integer spins, we show a concrete example of such spin-1 topological paramagnets on a kagome lattice, and we suggest a microscopic spin-1 Hamiltonian that may realize it. Lu, Yuan-Ming; Lee, Dung-Hai 2014-05-01 366 When electrons are confined in two-dimensional materials, quantum-mechanically enhanced transport phenomena such as the quantum Hall effect can be observed. Graphene, consisting of an isolated single atomic layer of graphite, is an ideal realization of such a two-dimensional system. However, its behaviour is expected to differ markedly from the well-studied case of quantum wells in conventional semiconductor interfaces. This difference Yuanbo Zhang; Yan-Wen Tan; Horst L. Stormer; Philip Kim 2005-01-01 367 PubMed Central Individual InAs/GaAs quantum dots are studied by micro-photoluminescence. By varying the strength of an applied external magnetic field and/or the temperature, it is demonstrated that the charge state of a single quantum dot can be tuned. This tuning effect is shown to be related to the in-plane electron and hole transport, prior to capture into the quantum dot, since the photo-excited carriers are primarily generated in the barrier. 2010-01-01 368 This paper reports the synthesis of highly luminescent CdSe quantum dots via a wet-chemical process and a study of the effect of surfactant concentration on the improvement of the photoluminescence characteristic. Here, we also discuss in detail the quantum dot synthesis procedure and the mechanism for the improvement of the luminescence characteristic of CdSe quantum dots under different surfactant concentrations. N. A. Bakar; A. A. Umar; T. H. T.
Aziz; S. H. Abdullah; M. M. Salleh; M. Yahaya; B. Y. Majlis 2008-01-01 369 Silicene is a monolayer of silicon atoms forming a two-dimensional honeycomb lattice, which was experimentally manufactured this year. The low-energy theory is described by Dirac electrons, but they are massive due to a relatively large spin-orbit interaction. I will explain the following properties of silicene: 1) The band structure is controllable by applying an electric field [1]. Silicene undergoes a phase transition from a topological insulator to a band insulator under an external electric field [1]. 2) The topological phase transition can be detected experimentally by way of diamagnetism [7]. 3) There are novel circular dichroism and spin-valley selection rules by way of photon absorption [6]. 4) Silicene shows a quantum anomalous Hall effect when a ferromagnet is attached onto silicene [3]. 5) Silicene shows a photo-induced quantum Hall effect when we apply a strong laser onto silicene [8]. 6) A single-Dirac-cone state emerges when we apply photo-irradiation and an electric field, where the gap is open at the K point and closed at the K' point [8]. [1] M. Ezawa, New J. Phys. 14, 033003 (2012). [2] M. Ezawa, J. Phys. Soc. Jpn. 81, 064705 (2012). [3] M. Ezawa, Phys. Rev. Lett. 109, 055502 (2012). [4] M. Ezawa, Europhysics Letters 98, 67001 (2012). [5] M. Ezawa, J. Phys. Soc. Jpn. 81, 104713 (2012). [6] M. Ezawa, Phys. Rev. B 86, 161407(R) (2012). [7] M. Ezawa, arXiv:1205.6541 (to be published in EPJB). [8] M. Ezawa, arXiv:1207.6694. [9] M. Ezawa, arXiv:1209.2580. Ezawa, Motohiko 2013-03-01 370 SciTech Connect We present Chandra High Resolution Camera observations of CID-42, a candidate recoiling supermassive black hole (SMBH) at z = 0.359 in the COSMOS survey.
CID-42 shows two optical compact sources resolved in the HST/ACS image embedded in the same galaxy structure and a velocity offset of ≈1300 km s⁻¹ between the Hβ broad and narrow emission lines, as presented by Civano et al. Two scenarios have been proposed to explain the properties of CID-42: a gravitational wave (GW) recoiling SMBH and a double Type 1/Type 2 active galactic nucleus (AGN) system, where one of the two is recoiling because of a slingshot effect. In both scenarios, one of the optical nuclei hosts an unobscured AGN, while the other is either an obscured AGN or a star-forming compact region. The X-ray Chandra data allow us to unambiguously resolve the X-ray emission and unveil the nature of the two optical sources in CID-42. We find that only one of the optical nuclei is responsible for the whole X-ray unobscured emission observed, and a 3σ upper limit on the flux of the second optical nucleus is measured. The upper limit on the X-ray luminosity plus the analysis of the multiwavelength spectral energy distribution indicates the presence of a star-forming region in the second source rather than an obscured SMBH, thus favoring the GW recoil scenario. However, the presence of a very obscured SMBH cannot be fully ruled out. A new X-ray feature, in a SW direction with respect to the main source, is discovered and discussed. Civano, F.; Elvis, M.; Lanzuisi, G.; Aldcroft, T.; Trichas, M.; Fruscione, A. [Smithsonian Astrophysical Observatory, 60 Garden Street, Cambridge, MA 02138 (United States); Bongiorno, A.; Brusa, M. [Max-Planck-Institut fuer extraterrestrische Physik, Giessenbachstrasse 1, 85748 Garching (Germany); Blecha, L.; Loeb, A. [Department of Astronomy, Harvard University, 60 Garden Street, Cambridge, MA 02138 (United States); Comastri, A.; Gilli, R. [INAF-Osservatorio Astronomico di Bologna, Via Ranzani 1, Bologna 40127 (Italy); Salvato, M.; Komossa, S.
[Max-Planck-Institute for Plasma Physics, Excellence Cluster, Boltzmannstrasse 2, 85748 Garching (Germany); Koekemoer, A. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Mainieri, V. [ESO, Karl-Schwarzschild-Strasse 2, 85748 Garching (Germany); Piconcelli, E. [INAF-Osservatorio Astronomico di Roma, Via Frascati 33, Monteporzio-Catone 00040 (Italy); Vignali, C. [Dipartimento di Astronomia, Universita di Bologna, Via Ranzani 1, Bologna 40127 (Italy) 2012-06-10 371 SciTech Connect We compute the flux of linear momentum carried by gravitational waves emitted from spinning binary black holes at second post-Newtonian (2PN) order for generic orbits. In particular we provide explicit expressions of three new types of terms, namely, next-to-leading-order spin-orbit terms at 1.5 post-Newtonian (1.5PN) order, spin-orbit tail terms at 2PN order, and spin-spin terms at 2PN order. Restricting ourselves to quasicircular orbits, we integrate the linear-momentum flux over time to obtain the recoil velocity as a function of orbital frequency. We find that in the so-called superkick configuration the higher-order spin corrections can increase the recoil velocity by up to a factor ≈3 with respect to the leading-order PN prediction. Whereas the recoil velocity computed in PN theory within the adiabatic approximation can accurately describe the early inspiral phase, we find that its fast increase during the late inspiral and plunge, and the arbitrariness in determining until when it should be trusted, make the PN predictions for the total recoil not very accurate and robust. Nevertheless, the linear-momentum flux at higher PN orders can be employed to build more reliable resummed expressions aimed at capturing the nonperturbative effects until merger.
Furthermore, we provide expressions valid for generic orbits, and accurate at 2PN order, for the energy and angular momentum carried by gravitational waves emitted from spinning binary black holes. Specializing to quasicircular orbits, we compute the spin-spin terms at 2PN order in the expression for the evolution of the orbital frequency and find agreement with Mikoczi, Vasuth, and Gergely. We also verified that in the limit of extreme mass ratio our expressions for the energy and angular momentum fluxes match the ones of Tagoshi, Shibata, Tanaka, and Sasaki obtained in the context of black hole perturbation theory. Racine, Etienne; Buonanno, Alessandra [Maryland Center for Fundamental Physics, Department of Physics, University of Maryland, College Park, Maryland 20742 (United States); Kidder, Larry [Center for Radiophysics and Space Research, Cornell University, Ithaca, New York 14853 (United States) 2009-08-15 372 SciTech Connect The coalescence of a massive black hole (MBH) binary leads to the gravitational-wave recoil of the system and its ejection from the galaxy core. We have carried out N-body simulations of the motion of a M_BH = 3.7 × 10⁶ M_sun MBH remnant in the 'Via Lactea I' simulation, a Milky Way-sized dark matter halo. The black hole receives a recoil velocity of V_kick = 80, 120, 200, 300, and 400 km s⁻¹ at redshift 1.5, and its orbit is followed for over 1 Gyr within a 'live' host halo, subject only to gravity and dynamical friction against the dark matter background. We show that, owing to asphericities in the dark matter potential, the orbit of the MBH is highly nonradial, resulting in a significantly increased decay timescale compared to a spherical halo.
The simulations are used to construct a semi-analytic model of the motion of the MBH in a time-varying triaxial Navarro-Frenk-White dark matter halo plus a spherical stellar bulge, where the dynamical friction force is calculated directly from the velocity dispersion tensor. Such a model should offer a realistic picture of the dynamics of kicked MBHs in situations where gas drag, friction by disk stars, and the flattening of the central cusp by the returning black hole are all negligible effects. We find that MBHs ejected with initial recoil velocities V_kick ≳ 500 km s⁻¹ do not return to the host center within a Hubble time. In a Milky Way-sized galaxy, a recoiling hole carrying a gaseous disk of initial mass ≈M_BH may shine as a quasar for a substantial fraction of its 'wandering' phase. The long decay timescales of kicked MBHs predicted by this study may thus be favorable to the detection of off-nuclear quasar activity. Guedes, J.; Madau, P.; Diemand, J. [Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064 (United States); Kuhlen, M. [Institute for Advanced Study, Einstein Drive, Princeton, NJ 08540 (United States); Zemp, M. [Astronomy Department, University of Michigan, Ann Arbor, MI 48109 (United States) 2009-09-10 373 A strong quantum-confined Stark effect (QCSE) from light-hole-related transitions at the Γ point (LH1-cΓ1) in Ge/Si0.15Ge0.85 multiple quantum wells is demonstrated from both photocurrent and optical transmission measurements. Our experimental results show a large and sharp optical absorption peak due to LH1-cΓ1 transitions, and its associated strong absorption change based on the QCSE. By exploiting LH1-cΓ1 transitions, optical modulators with improved compactness and competitive extinction ratio and optical loss can be envisioned for low-energy chip-scale optical interconnect applications. Chaisakul, P.; Marris-Morini, D.; Rouifed, M.
S.; Frigerio, J.; Isella, G.; Chrastina, D.; Coudevylle, J.-R.; Le Roux, X.; Edmond, S.; Bouville, D.; Vivien, L. 2013-05-01 374 NASA Technical Reports Server (NTRS) Successful implementation of technology using self-forming semiconductor Quantum Dots (QDs) has already demonstrated that the temperature-independent Dirac-delta density of states can be exploited in low-current-threshold QD lasers and QD infrared photodetectors. Leon, R.; Swift, G.; Magness, B.; Taylor, W.; Tang, Y.; Wang, K.; Dowd, P.; Zhang, Y. 2000-01-01 375 SciTech Connect Non-maximally symmetric spaces provide a more general background than maximally symmetric spaces in which to explore the relation between the geometry of the manifold and the quantum fields defined on it. A static Taub universe is used to study the effect of curvature anisotropy on the spontaneous symmetry breaking of a self-interacting scalar field. The one-loop effective potential of a λφ⁴ field with arbitrary coupling ξ is computed by zeta-function regularization. For massless minimally coupled scalar fields, first-order phase transitions can occur. Keeping the shape invariant but decreasing the curvature radius of the universe induces symmetry breaking. If the curvature radius is held constant, increasing deformation can restore the symmetry. Studies of the higher-dimensional Kaluza-Klein theories also focus on the deformation effect. Using dimensional regularization, the effective potentials of the free scalar fields in M⁴ × T^N and M⁴ × (Taub)³ spaces are obtained. The stability criteria for the static solutions of the self-consistent Einstein equations are derived. Stable solutions of the M⁴ × S^N topology do not exist. With the Taub space as the internal space, the gauge coupling constants of SU(2) and U(1) can be determined geometrically. The weak angle is therefore predicted by geometry in this model. Shen, T.C.
1985-01-01 376 Non-maximally symmetric spaces provide a more general background than maximally symmetric spaces in which to explore the relation between the geometry of the manifold and the quantum fields defined on it. A static Taub universe is used to study the effect of curvature anisotropy on the spontaneous symmetry breaking of a self-interacting scalar field. The one-loop effective potential of a λφ⁴ field with arbitrary coupling ξ is computed by zeta-function regularization. We find that for massless minimally coupled scalar fields, first-order phase transitions can occur. Keeping the shape invariant but decreasing the curvature radius of the universe induces symmetry breaking. If the curvature radius is held constant, increasing deformation can restore the symmetry. Studies of the higher-dimensional Kaluza-Klein theories also focus on the deformation effect. Using dimensional regularization, the effective potentials of the free scalar fields in M⁴ × T^N and M⁴ × (Taub)³ spaces are obtained. The stability criteria for the static solutions of the self-consistent Einstein equations are derived. We find that stable solutions of the M⁴ × S^N topology do not exist. With the Taub space as the internal space, the gauge coupling constants of SU(2) and U(1) can be determined geometrically. The weak angle is therefore predicted by geometry in this model. Shen, T. C. 1985-12-01 377 The detailed understanding of the dynamics of ionization processes remains a fundamental issue in atomic physics. Recently, a powerful new spectroscopic technique, cold target recoil ion momentum spectroscopy (COLTRIMS), which retains high momentum resolution while achieving almost complete solid-angle coverage, has been developed. Most studies have used supersonically cooled inert-gas targets to achieve high momentum resolution. We use laser-cooling techniques to introduce new targets.
A magneto-optic trap of Li atoms for precision studies of ionization processes using recoil ion momentum spectroscopy has been realized. A simplified loading scheme was used to achieve the Li MOT, resulting in modest densities of order 10^9 atoms/cm^3. Simple methods to increase the trap density, commensurate with the requirement of stabilizing the trap in the uniform electric field required for ion extraction, are being pursued. The design of the combined MOT-recoil ion spectrometer and initial tests will be described. Hasegawa, S.; Lu, Z.-T.; Young, L.; Lindsay, M.; Sibener, S. J. 1998-05-01 378 A key result of isotropic loop quantum cosmology is the existence of a quantum bounce which occurs when the energy density of the matter field approaches a universal maximum close to the Planck density. Though the bounce has been exhibited in various matter models, due to severe computational challenges some important questions have so far remained unaddressed. These include the demonstration of the bounce for widely spread states, its detailed properties for states in which the matter field probes regions close to the Planck volume, and the reliability of the continuum effective spacetime description in general. In this manuscript we rigorously answer these questions using the Chimera numerical scheme for the isotropic spatially flat model sourced with a massless scalar field. We show that, as expected from an exactly solvable model, the quantum bounce is a generic feature of states even with a very wide spread, and for those which bounce much closer to the Planck volume. We perform a detailed analysis of the departures from the effective description and find some expected, and some surprising, results. At a coarse level of description, the effective dynamics can be regarded as a good approximation to the underlying quantum dynamics unless the states correspond to small scalar field momenta, in which case they bounce closer to the Planck volume, or are very widely spread.
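The effective spacetime description referenced in record 378 is commonly summarized by the modified Friedmann equation of loop quantum cosmology, H² = (8πG/3)ρ(1 − ρ/ρc), which replaces the big-bang singularity with a bounce at the critical density ρc. A minimal sketch (illustrative units with G = ρc = 1; this checks the standard closed-form massless-scalar solution, not the Chimera scheme used in the abstract):

```python
import math

G = 1.0        # illustrative units, not physical values
RHO_C = 1.0    # critical (bounce) density
V_B = 1.0      # volume at the bounce
C = 24 * math.pi * G * RHO_C

def volume(t):
    # closed-form effective-LQC solution for a massless scalar field
    return V_B * math.sqrt(1 + C * t * t)

def density(t):
    # energy density rho = rho_c / (1 + C t^2), maximal at the bounce t = 0
    return RHO_C / (1 + C * t * t)

def hubble(t, dt=1e-6):
    # H = V'/(3V), via a central finite difference
    return (volume(t + dt) - volume(t - dt)) / (2 * dt * 3 * volume(t))

# verify the modified Friedmann equation H^2 = (8 pi G / 3) rho (1 - rho/rho_c)
for t in (0.5, 1.0, 2.0):
    lhs = hubble(t) ** 2
    rhs = (8 * math.pi * G / 3) * density(t) * (1 - density(t) / RHO_C)
    assert abs(lhs - rhs) < 1e-6 * max(lhs, 1e-12)

# the volume bounces: a minimum at t = 0 instead of a singularity
assert volume(0.0) <= min(volume(-1.0), volume(1.0))
```

The quadratic ρ²/ρc correction vanishes at low density, recovering the classical Friedmann equation far from the bounce.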
Quantifying the amount of discrepancy between the quantum and the effective dynamics, we find that the departure between them depends in a subtle and non-monotonic way on the field momentum and the different fluctuations. Interestingly, the departures are generically found to be such that the effective dynamics overestimates the spacetime curvature and underestimates the volume at the bounce. Diener, Peter; Gupt, Brajesh; Singh, Parampreet 2014-05-01 379 Using analytical theory and simulations, we assess the impact of quantum effects on non-linear wave-particle interactions in quantum plasmas. We more specifically focus on the resonant interaction between Langmuir waves and electrons, which, in classical plasmas, leads to particle trapping. Two regimes are identified depending on the difference between the time scale of oscillation t_B(k) = √(m/(eEk)) of a trapped electron and the quantum time scale t_q(k) = 2m/(ℏk²) related to the recoil effect, where E and k are the wave amplitude and wave vector. In the classical-like regime, t_B(k) < t_q(k), resonant electrons are trapped in the wave troughs and greatly affect the evolution of the system long before the wave has had time to Landau damp by a large amount according to linear theory. In the quantum regime, t_B(k) > t_q(k), particle trapping is hampered by the finite recoil imparted to resonant electrons in their interactions with plasmons. Daligault, Jérôme 2014-04-01 380 We propose an initial-state-dependent quantum-dot gain without population inversion in the vicinity of a resonant metallic nanoparticle. The gain originates from the hybridization of a dark plasmon-exciton and is accompanied by efficient energy transfer from the nanoparticle to the quantum dot. This hybridization of the dark plasmon-exciton, attached to the hybridization of the bright plasmon-exciton, strengthens nonlinear light-quantum-emitter interactions at the nanoscale; thus the spectral overlap between the dark and the bright plasmons enhances the gain effect.
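The two timescales quoted in record 379 can be compared directly. A small sketch with electron constants; the wave amplitude E and wave number k below are purely illustrative values (not taken from the paper), chosen only to land on either side of the t_B(k) = t_q(k) boundary:

```python
import math

M_E = 9.109e-31    # electron mass (kg)
Q_E = 1.602e-19    # elementary charge (C)
HBAR = 1.055e-34   # reduced Planck constant (J s)

def bounce_time(E, k):
    """Trapping (bounce) timescale t_B = sqrt(m / (e E k)) of an electron
    in a Langmuir wave of amplitude E (V/m) and wave number k (1/m)."""
    return math.sqrt(M_E / (Q_E * E * k))

def recoil_time(k):
    """Quantum recoil timescale t_q = 2 m / (hbar k^2)."""
    return 2 * M_E / (HBAR * k * k)

def regime(E, k):
    # classical-like when trapping is faster than the quantum recoil time
    return "classical-like" if bounce_time(E, k) < recoil_time(k) else "quantum"

# hypothetical parameters: a strong wave traps electrons, a weak one does not
assert regime(100.0, 1e7) == "classical-like"
assert regime(1e-4, 1e7) == "quantum"
```

Since t_B scales as E^(-1/2) at fixed k, lowering the wave amplitude eventually pushes any wave number into the recoil-dominated regime.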
This hybrid system has potential applications in ultracompact tunable quantum devices. Zhao, Dongxing; Gu, Ying; Wu, Jiarui; Zhang, Junxiang; Zhang, Tiancai; Gerardot, Brian D.; Gong, Qihuang 2014-06-01 381 In a recent paper we examined the short-time propagator for the Schrödinger equation of a point source. An accurate expression modulo Δt² for the propagator showed that it was independent of the quantum potential, implying that the quantum motion is classical for very short times. In this paper we apply these results to the experiment of Itano, Heinzen, Bollinger and Wineland which demonstrates the quantum Zeno effect in beryllium. We show that the transition is inhibited because the applied continuous-wave radiation suppresses the quantum potential necessary for the transition to occur. This shows there is no need to appeal to wave function collapse. de Gosson, Maurice A.; Hiley, Basil J. 2014-04-01 382 We report evidence of a strain-induced piezoelectric field in wurtzite InAs/InP quantum rod nanowires. This electric field, caused by the lattice mismatch between InAs and InP, results in the quantum-confined Stark effect and, as a consequence, affects the optical properties of the nanowire heterostructure. It is shown that the piezoelectric field can be screened by photogenerated carriers or removed by increasing temperature. Moreover, a dependence of the piezoelectric field on the quantum rod diameter is observed, in agreement with simulations of wurtzite InAs/InP quantum rod nanowire heterostructures. Anufriev, Roman; Chauvin, Nicolas; Khmissi, Hammadi; Naji, Khalid; Patriarche, Gilles; Gendry, Michel; Bru-Chevallier, Catherine 2014-05-01 383 PubMed Thin films of transition metal nitrides are interesting materials due to their special features such as high hardness and chemical inertness. In our present work, we report an Elastic Recoil Detection Analysis of the FeN/Si system using a 100 MeV Au beam.
Recoil ions were detected using a ΔE-E detector telescope. The diffraction pattern of pristine FeN thin films indicates the amorphous nature of the iron nitride thin film. MOKE results show that irradiation can be used to control magnetic properties such as coercivity. PMID:22871447 Dhunna, Renu; Jain, I P; Sahajwalla, Veena 2012-10-01 384 Based on standard field-theoretic considerations, we develop an effective action approach for investigating quantum phase transitions in lattice Bose systems at arbitrary temperature. We begin by adding to the Hamiltonian of interest a symmetry-breaking source term. Using time-dependent perturbation theory, we then expand the grand-canonical free energy as a double power series in both the tunneling and the source term. From here, an order parameter field is introduced in the standard way and the underlying effective action is derived via a Legendre transformation. Determining the Ginzburg-Landau expansion to first order in the tunneling term, expressions for the Mott insulator-superfluid phase boundary, condensate density, average particle number, and compressibility are derived and analyzed in detail. Additionally, excitation spectra in the ordered phase are found by considering both longitudinal and transverse variations of the order parameter. Finally, these results are applied to the concrete case of the Bose-Hubbard Hamiltonian on a three-dimensional cubic lattice, and compared with the corresponding results from mean-field theory. Although both approaches yield the same Mott insulator-superfluid phase boundary to first order in the tunneling, the predictions of our effective action theory turn out to be superior to the mean-field results deeper into the superfluid phase. Bradlyn, Barry; Dos Santos, Francisco Ednilson A.; Pelster, Axel 2009-01-01 385 The study of open quantum systems is important for fundamental issues of quantum physics as well as for technological applications such as quantum information processing.
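For context on the Mott insulator-superfluid boundary discussed in record 384: the mean-field (decoupling) approximation that the authors compare against gives a closed-form phase boundary at first order in the tunneling, zJ_c/U = [(n+1)/(n − μ/U) + n/(μ/U − n + 1)]⁻¹ for the lobe with filling n. A sketch of that standard textbook result (not the effective-action method of the abstract):

```python
# Mean-field phase boundary of the Bose-Hubbard model (decoupling
# approximation, first order in the tunneling J, coordination number z).

def zjc_over_u(mu, n):
    """Critical z*J/U for the Mott lobe with filling n, at chemical
    potential mu in units of U (valid for n - 1 < mu < n)."""
    chi = (n + 1) / (n - mu) + n / (mu - (n - 1))
    return 1.0 / chi

# the tip of the n = 1 lobe, found by maximizing over mu, is the known
# mean-field critical point zJ/U = 3 - 2*sqrt(2) at mu/U = sqrt(2) - 1
mus = [i / 1000 for i in range(1, 1000)]
tip = max(zjc_over_u(mu, 1) for mu in mus)
assert abs(tip - (3 - 2 * 2 ** 0.5)) < 1e-4
```

Inside the lobe the boundary shrinks to zero as μ/U approaches the integer endpoints, reproducing the familiar lobe shape of the mean-field phase diagram.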
Recent developments in this field have increased our basic understanding of how non-Markovian effects influence the dynamics of an open quantum system, paving the way to exploit memory effects for various quantum control tasks. Most often, the environment of an open system is thought to act as a sink for the system information. However, here we demonstrate experimentally that a photonic open system can exploit the information initially held by its environment. Correlations in the environmental degrees of freedom induce nonlocal memory effects where the bipartite open system displays, counterintuitively, local Markovian and global non-Markovian character. Our results also provide novel methods to protect and distribute entanglement, and to experimentally quantify correlations in photonic environments. Liu, Bi-Heng; Cao, Dong-Yang; Huang, Yun-Feng; Li, Chuan-Feng; Guo, Guang-Can; Laine, Elsi-Mari; Breuer, Heinz-Peter; Piilo, Jyrki 2013-05-01 386 PubMed Central The study of open quantum systems is important for fundamental issues of quantum physics as well as for technological applications such as quantum information processing. Recent developments in this field have increased our basic understanding of how non-Markovian effects influence the dynamics of an open quantum system, paving the way to exploit memory effects for various quantum control tasks. Most often, the environment of an open system is thought to act as a sink for the system information. However, here we demonstrate experimentally that a photonic open system can exploit the information initially held by its environment. Correlations in the environmental degrees of freedom induce nonlocal memory effects where the bipartite open system displays, counterintuitively, local Markovian and global non-Markovian character. Our results also provide novel methods to protect and distribute entanglement, and to experimentally quantify correlations in photonic environments.
Liu, Bi-Heng; Cao, Dong-Yang; Huang, Yun-Feng; Li, Chuan-Feng; Guo, Guang-Can; Laine, Elsi-Mari; Breuer, Heinz-Peter; Piilo, Jyrki 2013-01-01 387 We consider a point particle in one dimension initially confined to a finite spatial region whose state is frequently monitored by projection operators onto that region. In the limit of infinitely frequent monitoring, the state never escapes from the region; this is the Zeno effect. In the corresponding classical problem, by contrast, the state diffuses out of the region, with the frequent monitoring simply removing probability. The aim of this paper is to show how the Zeno effect disappears in the classical limit in this and similar examples. We give a general argument showing that the Zeno effect is suppressed in the presence of a decoherence mechanism which suppresses interference between histories. We show how this works explicitly in two examples involving projections onto a one-dimensional subspace and identify the key time scales for the process. We extend this understanding to our main problem of interest, the case of a particle in a spatial region, by coupling it to a decohering environment. Smoothed projectors are required to give the problem proper definition, and this implies the existence of a momentum cutoff and a minimum length scale. We show that the escape rate from the region approaches the classically expected result, and hence the Zeno effect is suppressed, as long as the environmentally induced fluctuations in momentum are sufficiently large. We establish the time scale on which an arbitrary initial state develops sufficiently large fluctuations to satisfy this condition. We link our results to earlier work on the ℏ → 0 limit of the Zeno effect. We illustrate our results by plotting the probability flux lines for the density matrix (which are equivalent to Bohm trajectories in the pure-state case). These illustrate both the Zeno and anti-Zeno effects very clearly, and their suppression.
Our results are closely related to our earlier paper [Phys. Rev. A 88, 022128 (2013), 10.1103/PhysRevA.88.022128], demonstrating the suppression of quantum-mechanical reflection by decoherence. Bedingham, D.; Halliwell, J. J. 2014-04-01 388 A theoretical description is given of the dependence of the threshold voltage, VTH, of SOI MOSFETs on the top silicon layer thickness, ts, over a wide range, using both classical and quantum-mechanical methods. The quantum-mechanical effects become significant below the critical thickness and raise VTH with decreasing ts. The classical method cannot be applied in such a thin ts region, since Yasuhisa Omura; Seiji Horiguchi; Michiharu Tabe; Kenji Kishi 1993-01-01 389 SciTech Connect Using the quantum magnetohydrodynamic model and obtaining the dispersion relation of the Cherenkov and cyclotron waves, the acceleration of positrons by a relativistic electron beam is investigated. The Cherenkov and cyclotron acceleration mechanisms of positrons are compared. It is shown that the growth rate and, therefore, the acceleration of positrons can be increased in the presence of quantum effects. Niknam, A. R. [Laser and Plasma Research Institute, Shahid Beheshti University, G.C., Tehran (Iran, Islamic Republic of)]; Aki, H.; Khorashadizadeh, S. M. [Physics Department, Birjand University, Birjand (Iran, Islamic Republic of)] 2013-09-15 390 PubMed We measure the absolute absorption cross section of molecules using a matter-wave interferometer. A nanostructured density distribution is imprinted onto a dilute molecular beam through quantum interference. As the beam crosses the light field of a probe laser some molecules will absorb a single photon.
These absorption events impart a momentum recoil which shifts the position of the molecule relative to the unperturbed beam. Averaging over the shifted and unshifted components within the beam leads to a reduction of the fringe visibility, enabling the absolute absorption cross section to be extracted with high accuracy. This technique is independent of the molecular density, it is minimally invasive, and it successfully eliminates many problems related to photon cycling, state mixing, photobleaching, photoinduced heating, fragmentation, and ionization. It can therefore be extended to a wide variety of neutral molecules, clusters, and nanoparticles. PMID:25014795 Eibenberger, Sandra; Cheng, Xiaxi; Cotter, J P; Arndt, Markus 2014-06-27 391 We investigate the sensitivity of quantum systems that are chaotic in a classical limit to small perturbations of their equations of motion. This sensitivity, originally studied in the context of defining quantum chaos, is relevant to decoherence when the environment has a chaotic classical counterpart. Karkuszewski, Zbyszek P.; Jarzynski, Christopher; Zurek, Wojciech H. 2002-10-01 392 PubMed We investigate the sensitivity of quantum systems that are chaotic in a classical limit to small perturbations of their equations of motion. This sensitivity, originally studied in the context of defining quantum chaos, is relevant to decoherence when the environment has a chaotic classical counterpart. PMID:12398653 Karkuszewski, Zbyszek P; Jarzynski, Christopher; Zurek, Wojciech H 2002-10-21 393 We investigate the sensitivity of quantum systems that are chaotic in a classical limit to small perturbations of their equations of motion. This sensitivity, originally studied in the context of defining quantum chaos, is relevant to decoherence when the environment has a chaotic classical counterpart. Zbyszek P. Karkuszewski; Christopher Jarzynski; Wojciech H.
Zurek 2002-01-01 394 We investigate the sensitivity of quantum systems that are chaotic in a classical limit, to small perturbations of their equations of motion. This sensitivity, originally studied in the context of defining quantum chaos, is relevant to decoherence in situations when the environment has a chaotic classical counterpart. Zbyszek P. Karkuszewski; Christopher Jarzynski; Wojciech H. Zurek 2002-01-01 395 The performance of biological sensory systems is shown to reach the quantum limits to measurement, this being true in spite of the high levels of thermal noise associated with operation at physiological temperatures. Theoretical issues associated with quantum-limited measurement at high temperatures are addressed and strategies for such measurements which make use of active filtering are formulated. Experimental and theoretical Bialek 1983-01-01 396 National Technical Information Service (NTIS) A quantum protocol is described which enables a user to send sealed messages and that allows for the detection of active eavesdroppers. We examine a class of eavesdropping strategies, those that make use of quantum operations, and we determine the informa... P. A. Lopata; T. B. Bahder 2007-01-01 397 Shell phenomena in small quantum dots with a few electrons under a perpendicular magnetic field are discussed within a simple model. It is shown that various kinds of shell structures, which occur at specific values of the magnetic field, lead to a disappearance of the orbital magnetization for particular magic numbers for noninteracting electrons in small quantum dots. Including the Coulomb interaction between two electrons, we found that the magnetic field gives rise to dynamical symmetries of a three-dimensional axially symmetric two-electron quantum dot with a parabolic confinement.
These symmetries manifest themselves as near-degeneracy in the quantum spectrum at specific values of the magnetic field and are robust at any strength of the electron-electron interaction. A remarkable agreement between experimental data and calculations demonstrates the important role of the thickness of the two-electron quantum dot in the analysis of ground state transitions in a perpendicular magnetic field. Nazmitdinov, R. G. 2009-01-01 398 In light of the established differences between the quantum confinement effect and the electron affinities between hydrogen-passivated C and Si quantum dots, we carried out theoretical investigations on SiC quantum dots, with surfaces uniformly terminated by C-H or Si-H bonds, to explore the role of surface terminations on these two aspects. Surprisingly, it was found that the quantum confinement effect is present (or absent) in the highest occupied (or lowest unoccupied) molecular orbital of the SiC quantum dots regardless of their surface terminations. Thus, the quantum confinement effect related to the energy gap observed experimentally (Phys. Rev. Lett., 2005, 94, 026102) arises from the size-dependence of the highest occupied states; the absence of quantum confinement in the lowest unoccupied states is contrary to the usual belief based on hydrogen-passivated C quantum dots. However, the cause of the absence of the quantum confinement in C nanodots is not transferable to SiC. We propose a model that provides a clear explanation for all findings on the basis of the nearest-neighbor and next-nearest-neighbor interactions between the valence atomic p-orbitals in the frontier occupied/unoccupied states. We also found that the electron affinities of the SiC quantum dots, which closely depend on the surface environments, are negative for the C-H termination and positive for the Si-H termination.
The prediction of negative electron affinities in SiC quantum dots by simple C-H termination indicates a promising application for these materials in electron-emitter devices.
Our model predicts that GeC quantum dots with hydrogen passivation exhibit similar features to SiC quantum dots and our study confirms the crucial role that the surface environment plays in these nanoscale systems. Electronic supplementary information (ESI) available. See DOI: 10.1039/c2nr12099b Zhang, Zhenkui; Dai, Ying; Yu, Lin; Guo, Meng; Huang, Baibiao; Whangbo, Myung-Hwan 2012-02-01 399 In this letter, we study the impact of quantum confinement in double-gate tunneling field-effect transistors with different body thicknesses in the presence of high-κ gate dielectrics. Although better ON currents have been reported for these devices based on semiclassical simulations, the inclusion of quantum effects makes the formerly continuous conduction and valence bands become a discrete set of energy subbands, thus increasing the effective bandgap and consequently reducing the current levels. If the high-κ dielectric layer covers both the source and the drain, the band energy structure at the tunneling junction is modified (tunneling widths are increased), hence resulting in performance degradation. An optimal configuration seeking to improve ON currents would require low permittivity dielectrics over S/D regions along with high-κ materials under the gates. Padilla, J. L.; Gámiz, F.; Godoy, A. 2013-09-01 400 The effect of magnetic contacts on spin-dependent electron transport and spin-accumulation in a quantum ring, which is threaded by a magnetic flux, is studied. The quantum ring is made up of four quantum dots, where two of them possess magnetic structure and the other two are subject to the Rashba spin-orbit coupling. The magnetic quantum dots, referred to as magnetic quantum contacts, are connected to two external leads. Two different configurations of magnetic moments of the quantum contacts are considered: the parallel and the anti-parallel ones.
When the magnetic moments are parallel, the degeneracy between the transmission coefficients of spin-up and spin-down electrons is lifted and the system can be adjusted to operate as a spin-filter. In addition, the accumulation of spin-up and spin-down electrons in non-magnetic quantum dots is different in the case of parallel magnetic moments. When the intra-dot Coulomb interaction is taken into account, we find that the electron interactions contribute to the separation between the accumulations of electrons with different spin directions in non-magnetic quantum dots. Furthermore, the spin-accumulation in non-magnetic quantum dots can be tuned for both the parallel and anti-parallel configurations by adjusting the Rashba spin-orbit strength and the magnetic flux. Thus, the quantum ring with magnetic quantum contacts could be utilized to create tunable local magnetic moments which can be used in designing optimized nanodevices. 2014-05-01 401 NSDL National Science Digital Library Gain and Loss are the fundamental factors contributing to laser effectiveness. Simply put, the gain is the light produced by stimulated emission and the loss is the light lost. This can happen if a photon hits an electron in a low energy level state and the electron absorbs the energy and moves to a higher energy level state. It can also happen when light escapes the laser cavity. Lasing is the condition when the gain exceeds the loss. It is very important to know the gain to see how effective your laser really is. The traditional Hakki-Paoli Method was found to be ineffective for measuring gain in quantum cascade lasers. A new, more effective method of measuring gain in quantum cascade lasers was developed and tested. Haslam, Bryan 2005-08-05 402 We examine results presented by Fearn and Lamb [Phys. Rev.
A 46, 1199 (1992)], who search for, but fail to find, the quantum Zeno effect in measurements of the position of a particle in a double potential well, and criticize the basic statement of the effect given by Misra and Sudarshan [J. Math. Phys. 18, 756 (1977)]. We suggest that position measurements are an inappropriate area to look for the effect; nevertheless, we show that some of the results of Fearn and Lamb should, and do, exhibit a form of weak effect. Though the collapse postulate, as used by Misra and Sudarshan, is not required to discuss quantum measurement and the quantum Zeno effect, its application describes adequately the results of measurement, and we reject the idea that the basic statement of the quantum Zeno effect is flawed. Home, D.; Whitaker, M. A. B. 1993-09-01 403 In the present work we revisit effective mass theory (EMT) for a semiconductor quantum dot (QD) and employ the BenDaniel-Duke (BDD) boundary condition. In effective mass theory, the mass mi inside the dot of radius R is different from the mass mo outside the dot. That gives us a crucial factor in determining the electronic spectrum, namely β = mi/m0. We show both by numerical calculations and asymptotic analysis that the ground state energy and the surface charge density ρ(r) can be large. We also show that the dependence of the ground state energy on the radius of the well is infraquadratic. We demonstrate that the significance of the BDD condition is pronounced at large R. We also study the dependence of the excited states on the radius as well as the difference between energy states. Both exhibit an infraquadratic behavior with radius. The energy difference is important in the study of absorption and emission spectra. We find that the BDD condition substantially alters the energy difference. Hence the interpretation of experimental results may need to be reexamined. Singh, R.
A.; Sinha, Abhinav; Pathak, Praveen 2011-07-01 404 Memory effects in the charge transport in arrays of CdSe nanocrystals have been observed and characterized. These semiconducting colloidal quantum dots have previously been shown to demonstrate a non-steady state current transient response to the application of a constant negative source-drain voltage bias. In this study we have shown that CdSe nanocrystals display memory of the voltage pulses applied to them. In particular, for a sequence of two negative voltage pulses, the nanocrystals' response to the second pulse will be dependent on the value and duration of the first pulse. We define the first voltage pulse as the "write" step and the second voltage pulse as the "read" step. To probe the programmability of the nanocrystals, a range of different write steps were performed and the current transients generated by the read steps were characterized. We have demonstrated the ability to undo the effect of the write steps by either shining band gap light on the nanocrystals or by applying a positive voltage bias; such events are naturally defined as "erase" steps. The full write-read-erase cycle demonstrates the potential for the application of CdSe nanocrystals to memory technology and offers new information on the charge transport. * This work is supported by the ONR Young Investigator Award # N000140410489, the American Chemical Society PRF award, and the startup funds at Penn. MF acknowledges funding from the NSF IGERT Program. Fischbein, Michael 2005-03-01 405 NASA Technical Reports Server (NTRS) Intersubband polarization couples to collective excitations of the interacting electron gas confined in a semiconductor quantum well (QW) structure. Such excitations include correlated pair excitations (repellons) and intersubband plasmons (ISPs). The oscillator strength of intersubband transitions (ISBTs) strongly varies with QW parameters and electron density because of this coupling.
We have developed a set of kinetic equations, termed the intersubband semiconductor Bloch equations (ISBEs), from density matrix theory with the Hartree-Fock approximation, that enables a consistent description of these many-body effects. Using the ISBEs for a two-conduction-subband model, various many-body effects in intersubband transitions are studied in this work. We find interesting spectral changes of intersubband absorption coefficient due to interplay of the Fermi-edge singularity, subband renormalization, intersubband plasmon oscillation, and nonparabolicity of bandstructure. Our results uncover a new perspective for ISBTs and indicate the necessity of proper many-body theoretical treatment in order for modeling and prediction of ISBT line shape. Li, Jian-Zhong; Ning, Cun-Zheng 2003-01-01 406 SciTech Connect In the present work we revisit effective mass theory (EMT) for a semiconductor quantum dot (QD) and employ the BenDaniel-Duke (BDD) boundary condition. In effective mass theory mass mi inside the dot of radius R is different from the mass mo outside the dot. That gives us a crucial factor in determining the electronic spectrum namely β = mi/m0. We show both by numerical calculations and asymptotic analysis that the ground state energy and the surface charge density, ρ(r) can be large. We also show that the dependence of the ground state energy on the radius of the well is infraquadratic. We demonstrate that the significance of BDD condition is pronounced at large R. We also study the dependence of excited state on the radius as well as the difference between energy states. Both exhibit an infraquadratic behavior with radius. The energy difference is important in study of absorption and emission spectra. We find that the BDD condition substantially alters the energy difference. Hence the interpretation of experimental result may need to be reexamined. Singh, R. A. [Dr. H. S.
Gour University, Sagar (M.P.), India 470 002 (India); Sinha, Abhinav [Electrical Engineering Department, IIT Bombay, Mumbai, India 400076 (India); Pathak, Praveen [Homi Bhabha Centre for Science Education (TIFR), V.N. Purav Marg, Mankhurd, Mumbai, India 400088 (India) 2011-07-15 407 PubMed The effect of the shape of nanocrystal sensitizers in photoelectrochemical cells is reported. CdSe quantum rods of different dimensions were effectively deposited rapidly by electrophoresis onto mesoporous TiO₂ electrodes and compared with quantum dots. Photovoltaic efficiency values of up to 2.7% were measured for the QRSSC, notably high values for TiO₂ solar cells with ex situ synthesized nanoparticle sensitizers. The quantum rod-based solar cells exhibit a red shift of the electron injection onset and charge recombination is significantly suppressed compared to dot sensitizers. The improved photoelectrochemical characteristics of the quantum rods over the dots as sensitizers are attributed to the elongated shape, allowing the build-up of a dipole moment along the rod that leads to a downward shift of the TiO₂ energy bands relative to the quantum rods, leading to improved charge injection. PMID:22452287 Salant, Asaf; Shalom, Menny; Tachan, Zion; Buhbut, Sophia; Zaban, Arie; Banin, Uri 2012-04-11 408 SciTech Connect Liquid Xenon (LXe) is expected to be an excellent target and detection medium to search for dark matter in the form of Weakly Interacting Massive Particles (WIMPs). We have measured the scintillation efficiency of nuclear recoils with kinetic energy between 10.4 and 56.5 keV relative to that of 122 keV gamma rays from ⁵⁷Co. The scintillation yield of 56.5 keV recoils was also measured as a function of applied electric field, and compared to that of gamma rays and alpha particles. The Xe recoils were produced by elastic scattering of 2.4 MeV neutrons in liquid xenon at a variety of scattering angles.
The relative scintillation efficiency is 0.130±0.024 and 0.227±0.016 for the lowest and highest energy recoils, respectively. This is about 15% less than the value predicted by Lindhard, based on nuclear quenching. Our results are in good agreement with more recent theoretical predictions that consider the additional reduction of scintillation yield due to biexcitonic collisions in LXe. Aprile, E.; Giboni, K.L.; Majewski, P.; Ni, K.; Yamashita, M.; Hasty, R.; Manzur, A.; McKinsey, D.N. [Physics Department and Astrophysics Laboratory, Columbia University, New York, New York 10027 (United States); Department of Physics, Yale University, P.O. Box 208120, New Haven, Connecticut 06520 (United States)] 2005-10-01 409 The rate of escape of Rn²²⁰ from solids containing radiothorium depends upon both recoil and diffusion and is steady at any temperature if the distribution of radioactive material remains unchanged. At very high temperatures the emanating power of ThO₂ (and other oxides) increases with time at constant temperature; the emanating power-temperature characteristics eventually resemble those J. S. Anderson; D. J. M. Bevan; J. P. Burden 1963-01-01 410 The St. George recoil mass separator is designed for the study of low energy (α,γ) reactions of astrophysical interest in inverse kinematics. The energy range of recoils will be 0.07 to 0.9 MeV/nucleon. A detection system is being developed for separating the recoils from the residual scattered beam at the focal plane. The detection system will consist of two position-sensitive microchannel plate (MCP) timing detectors separated by 50 cm followed by a single-sided silicon strip detector. Simulations were performed using the codes SIMION and GEANT4. Different designs for guiding the secondary electrons emitted from a thin carbon foil to the MCP were studied in the simulations.
Good timing and position resolution and minimization of transmission loss due to grids were key factors in selecting the final design. Time of flight will be recorded between the two MCPs. The delay line technique will be used for extracting the position information from the MCPs. The energy of the recoils will be recorded by the Si detector. A dedicated vacuum chamber and the modular design of the detection system will facilitate future improvements and customization for particular experiments. Kalkal, S.; Hinnefeld, J.; Morales, L.; Robertson, D.; Stech, E.; Berg, G. P. A.; Gorres, J.; Couder, M.; Wiescher, M. 2012-10-01 411 The detailed understanding of the dynamics of ionization processes remains a fundamental issue in atomic physics. Recently, a powerful new spectroscopic technique, cold target recoil ion momentum spectroscopy (COLTRIMS), which retains high momentum resolution while achieving almost complete solid angle coverage, has been developed. Most studies have used supersonically cooled inert gas targets to achieve high momentum resolution. We use laser-cooling S. Hasegawa; Z.-T. Lu; L. Young; M. Lindsay; S. J. Sibener 1998-01-01 412 PubMed The effects that nanometer-sized matrices have on the properties of molecules encapsulated within the nanomatrix are not fully understood. In this work, dye-doped silica nanoparticles were employed as a model for studying the effects of a nanomatrix on the fluorescence quantum yield of encapsulated dye molecules. Two types of dye molecules were selected based on their different responses to the surrounding media. Several factors that affect fluorescence quantum yields were investigated, including aggregation of dye molecules, diffusion of atmospheric oxygen, concentration of dye molecules, and size of the nanomatrix. The results showed that the silica nanomatrix has a varied effect on the fluorescence quantum yield of encapsulated dye molecules, including enhancement, quenching and insignificant changes.
Both the properties of dye molecules and the conditions of the nanomatrix played important roles in these effects. Finally, a physical model was proposed to explain the varied nanomatrix effects on the fluorescence quantum yield of encapsulated dye molecules. PMID:23958712 Liang, Song; Shephard, Kali; Pierce, David T; Zhao, Julia Xiaojun 2013-10-01 413 We construct higher-dimensional quantum Hall systems based on fuzzy spheres. It is shown that fuzzy spheres are realized as spheres in colored monopole backgrounds. The space noncommutativity is related to higher spins which originate from the internal structure of fuzzy spheres. In 2k-dimensional quantum Hall systems, the Laughlin-like wave function supports fractionally charged excitations, q=m (m is odd). Topological objects are (2k-2)-branes whose statistics are determined by the linking number related to the general Hopf map. Higher-dimensional quantum Hall systems exhibit a dimensional hierarchy, where lower-dimensional branes condense to make higher-dimensional incompressible liquid. Hasebe, Kazuki; Kimura, Yusuke 2004-11-01 414 National Technical Information Service (NTIS) The application of magnetorheological dampers for controlling recoil dynamics is examined, using a recoil demonstrator that includes a 0.50 caliber gun and an MR damper (referred to as 'recoil demonstrator'). Upon providing a brief background on MR damper... 2001-01-01 415 A methodology, Quantum Wavepacket Ab Initio Molecular Dynamics (QWAIMD), for the efficient, simultaneous dynamics of electrons and nuclei is presented. This approach allows for the quantum-dynamical treatment of a subset of nuclei in complex molecular systems while treating the remaining nuclei and electrons within the ab initio molecular dynamics (AIMD) paradigm.
Developments of QWAIMD discussed within include: (a) a novel sampling algorithm dubbed Time-Dependent Deterministic Sampling (TDDS), which increases the computational efficiency by several orders of magnitude; (b) generalizations to hybrid QM/QM and QM/MM electronic structure methods via a combination of the ONIOM and empirical valence bond approaches, which may allow for the accurate simulation of large molecules; and (c) a novel velocity-flux autocorrelation function to calculate the vibrational density-of-states of quantum-classical systems. These techniques are benchmarked on calculations of small, hydrogen-bound clusters. Furthermore, since many chemical processes occur over time-scales inaccessible to computation, a scheme is discussed and benchmarked here which can bias both QWAIMD and classical-AIMD dynamics to sample these long time-scale events, like proton transfer in enzyme catalysis. Finally, hydrogen tunneling in an enzyme, soybean lipoxygenase-1 (SLO-1) is examined by calculating the orbitals (eigenstates) of the transferring proton along the reaction coordinate. This orbital analysis is then supplemented by using quantum measurement theory to reexamine the transfer. Sumner, Isaiah 416 SciTech Connect We investigate all pure quantum-electrodynamics corrections to the np → 1s, n=2-4 transition energies of pionic hydrogen larger than 1 meV, which requires an accurate evaluation of all relevant contributions up to order α⁵. These values are needed to extract an accurate strong interaction shift from experiment. Many small effects, such as second-order and double vacuum polarization contributions, proton and pion self-energies, finite size and recoil effects are included with exact mass dependence. Our final value differs from previous calculations by up to ≈11 ppm for the 1s state, while a recent experiment aims at a 4 ppm accuracy. Schlesser, S.; Le Bigot, E.-O.; Indelicato, P.
[Laboratoire Kastler Brossel, Ecole Normale Superieure, CNRS, Universite Pierre et Marie Curie-Paris 6, Case 74, 4 place Jussieu, F-75005 Paris (France); Pachucki, K. [Faculty of Physics, University of Warsaw, Hoza 69, PL-00-681 Warsaw (Poland)] 2011-07-15 417 Experimental discovery of a quantized Hall state at 5/2 filling factor presented an enigmatic finding in an established field of study that has remained an open issue for more than twenty years. In this review we first examine the experimental requirements for observing this state and outline the initial theoretical implications and predictions. We will then follow the chronology of experimental studies over the years and present the theoretical developments as they pertain to experiments, directed at sets of issues. These topics will include theoretical and experimental examination of the spin properties at 5/2; is the state spin polarized? What properties of the higher Landau levels promote development of the 5/2 state, what other correlation effects are observed there, and what are their interactions with the 5/2 state? The 5/2 state is not a robust example of the fractional quantum Hall effect: what experimental and material developments have allowed enhancement of the effect? Theoretical developments from initial pictures have promoted the possibility that 5/2 excitations are exceptional; do they obey non-abelian statistics? The proposed experiments to determine this and their executions in various forms will be presented: this is the heart of this review. Experimental examination of the 5/2 excitations through interference measurements will be reviewed in some detail, focusing on recent results that demonstrate consistency with the picture of non-abelian charges. The implication of this for the more general physics picture is that the 5/2 excitations, shown to be non-abelian, should exhibit the properties of Majorana operators. This will be the topic of the last review section. Willett, R. L.
2013-07-01 418 NASA Technical Reports Server (NTRS) Starting directly from the microscopic Hamiltonian, a field-theory model is derived for the fractional quantum Hall effect. By considering an approximate coarse-grained version of the same model, a Landau-Ginzburg theory similar to that of Girvin (1986) is constructed. The partition function of the model exhibits cusps as a function of density. It is shown that the collective density fluctuations are massive. Zhang, S. C.; Hansson, T. H.; Kivelson, S. 1989-01-01 419 SciTech Connect We study strong correlation effects in topological insulators via the Lanczos algorithm, which we utilize to calculate the exact many-particle ground-state wave function and its topological properties. We analyze the simple, noninteracting Haldane model on a honeycomb lattice with known topological properties and demonstrate that these properties are already evident in small clusters. Next, we consider interacting fermions by introducing repulsive nearest-neighbor interactions. A first-order quantum phase transition was discovered at finite interaction strength between the topological band insulator and a topologically trivial Mott insulating phase by use of the fidelity metric and the charge-density-wave structure factor. We construct the phase diagram at T=0 as a function of the interaction strength and the complex phase for the next-nearest-neighbor hoppings. Finally, we consider the Haldane model with interacting hard-core bosons, where no evidence for a topological phase is observed. An important general conclusion of our work is that despite the intrinsic nonlocality of topological phases, their key topological properties manifest themselves already in small systems and therefore can be studied numerically via exact diagonalization and observed experimentally, e.g., with trapped ions and cold atoms in optical lattices. Varney, Christopher N.
[Department of Physics, Georgetown University, Washington, DC 20057 (United States); Joint Quantum Institute and Department of Physics, University of Maryland, College Park, Maryland 20742 (United States); Sun Kai; Galitski, Victor [Joint Quantum Institute and Department of Physics, University of Maryland, College Park, Maryland 20742 (United States); Condensed Matter Theory Center, Department of Physics, University of Maryland, College Park, Maryland 20742 (United States); Rigol, Marcos [Department of Physics, Georgetown University, Washington, DC 20057 (United States) 2010-09-15 420 In thermal leptogenesis, the cosmic matter-antimatter asymmetry is produced by CP violation in the decays N → ℓ + Φ of heavy right-handed Majorana neutrinos N into ordinary leptons ℓ and Higgs particles Φ. If some charged-lepton Yukawa couplings are in equilibrium during the leptogenesis epoch, the ℓ interactions with the background medium are flavour-sensitive and the coherence of their flavour content defined by N → ℓ + Φ is destroyed, modifying the efficiency of the inverse decays. We point out, however, that it is not enough that the flavour-sensitive processes are fast on the cosmic expansion timescale; they must be fast relative to the N ↔ ℓ + Φ reactions lest the flavour amplitudes of ℓ remain frozen by the repeated N ↔ ℓ + Φ 'measurements'. Our more restrictive requirement is significant in the most interesting 'strong wash-out case' where N ↔ ℓ + Φ is fast relative to the cosmic expansion rate. We derive conditions for the unflavoured treatment to be adequate and for flavour effects to be maximal. In this 'fully flavoured regime' a neutrino mass bound survives. To decide if this bound can be circumvented in the intermediate case, a full quantum kinetic treatment is required. Blanchet, S.; Di Bari, P.; Raffelt, G. G.
2007-03-01 421 We investigate the electronic structure and the quantum Hall effect in twisted bilayer graphenes with various rotation angles in the presence of a magnetic field. Using a low-energy approximation, which incorporates the rigorous interlayer interaction, we computed the energy spectrum and the quantized Hall conductivity in a wide range of magnetic field from the semiclassical regime to the fractal spectrum regime. In weak magnetic fields, the low-energy conduction band is quantized into electronlike and holelike Landau levels at energies below and above the van Hove singularity, respectively, and the Hall conductivity sharply drops from positive to negative when the Fermi energy goes through the transition point. In increasing magnetic field, the spectrum gradually evolves into a fractal band structure called Hofstadter's butterfly, where the Hall conductivity exhibits a nonmonotonic behavior as a function of Fermi energy. The typical electron density and magnetic field amplitude characterizing the spectrum monotonically decrease as the rotation angle is reduced, indicating that the rich electronic structure may be observed under moderate conditions. Moon, Pilkyung; Koshino, Mikito 2012-05-01 422 Development of quantum electrodynamical (QED) cascades in a standing electromagnetic wave for circular and linear polarizations is simulated numerically with a 3D PIC-MC code. It is demonstrated that for the same laser energy the number of particles produced in a circularly polarized field is greater than in a linearly polarized field, though the mean energy acquired per particle is larger in the latter case. The qualitative model of laser-assisted QED cascades is extended by including the effect of polarization of the field.
It turns out that cascade dynamics is notably more complicated in the case of linearly polarized field, where separation into the qualitatively different "electric" and "magnetic" regions (where the electric field is stronger than the magnetic field and vice versa) becomes essential. In the "magnetic" regions, acceleration is suppressed, and moreover the high-energy electrons are even getting cooled by photon emission. The volumes of the "electric" and "magnetic" regions evolve periodically in time and so does the cascade growth rate. In contrast to the linear polarization, the charged particles can be accelerated by circularly polarized wave even in "magnetic region." The "electric" and "magnetic" regions do not evolve in time, and cascade growth rate almost does not depend on time for circular polarization. Bashmakov, V. F.; Nerush, E. N.; Kostyukov, I. Yu.; Fedotov, A. M.; Narozhny, N. B. 2014-01-01 423 A variational ground state for insulating bilayer graphene (BLG), subject to quantizing magnetic fields, is proposed. Due to the Zeeman coupling, the layer antiferromagnet (LAF) order parameter in fully gapped BLG gets projected onto the spin easy plane, and simultaneously a ferromagnet order, which can further be enhanced by exchange interaction, develops in the direction of the magnetic field. The activation gap for the ν = 0 Hall state then displays a crossover from quadratic to linear scaling with the magnetic field, as it gets stronger, and I obtain excellent agreement with a number of recent experiments with realistic strengths for the ferromagnetic interaction. A component of the LAF order, parallel to the external magnetic field, gives birth to additional incompressible Hall states at filling ν = ±2, whereas the remote hopping in BLG yields ν = ±1 Hall states. Evolution of the LAF order in tilted magnetic fields, scaling of the gap at ν = 2, the effect of external electric fields on various Hall plateaus, and different possible hierarchies of fractional quantum Hall states are highlighted. Roy, Bitan 2014-05-01 424 We present a numerical study of the quantum Hall effect in modulated two-dimensional (2D) electron systems in presence of disorder. Theoretically, it is known that a 2D periodic potential in a strong magnetic field gives rise to a recursive subband structure in Landau levels, which is called the Hofstadter butterfly[1]. Recently, the nonmonotonic behavior of the Hall conductivity peculiar to this system was observed in lateral superlattices patterned on GaAs/AlGaAs heterostructures [2,3]. To study how the Hall plateau emerges in those split Landau levels, we numerically calculate the Hall conductivity in a disordered 2D electron system with weak modulations under various magnetic fields. We investigate the scaling property of the Hall conductivity as well as the localization length, to identify the critical energies where the extended states exist. The dependence on the field amplitudes and the Landau levels is also discussed. [1] D. R. Hofstadter, Phys. Rev. B 14, 2239 (1976). [2] C. Albrecht, et al. Phys. Rev. Lett. 86, 147 (2001) [3] M. C. Geisler, et al., Phys. Rev. Lett. 92, 256801 (2004). Koshino, Mikito; Ando, Tsuneya 2005-03-01 425 The exact modeling of external and internal perturbations acting on spacecraft becomes increasingly important as the scientific requirements become more demanding. Disturbance models included in orbit determination and propagation tools need to be improved to account for the needed accuracy. The simulation of perturbations which are caused by thermal effects are particularly challenging because the optical properties of spacecraft surfaces can change during the mission due to exposure to the space environment.
At ZARM (Center of Applied Space Technology and Microgravity) algorithms for the simulation and analysis of thermal perturbations have been developed. These codes include the simulation of the thermal recoil force (waste heat dissipation), Earth Albedo influence as well as Solar radiation pressure. The applied methods are based on the inclusion of the actual spacecraft geometry by means of Finite Element (FE) models in the calculation of the disturbance forces. Thus the modeling accuracy is increased considerably and also housekeeping and sensor data can be included in the calculations. As an application for the developed method a test case model of the Pioneer 11 radio isotopic thermal generators is presented. For accurate thermal modeling the knowledge of optical surface properties and their change during the mission of a spacecraft is crucial. Looking at material behaviour in space, in-situ experiments are indispensable because in ground tests space environment can be simulated only partially. At ZARM a dedicated nano satellite concept has been developed which enables the low cost and repeatable observation of material behaviour in space. The concept consists of a bus cube in the center of the satellite and two experiment cubes attached at the sides which include thermal sensors for the direct measurement of the thermo-optical properties of different materials. The design and the mission will be presented and the impact on thermal modeling will be discussed. Rievers, Benny; Lämmerzahl, C. 2009-05-01 426 SciTech Connect The influence of finite Larmor radius (FLR) effects on the Jeans instability of infinitely conducting homogeneous quantum plasma is investigated. The quantum magnetohydrodynamic (QMHD) model is used to formulate the problem. The contribution of FLR is incorporated to the QMHD set of equations in the present analysis. 
The general dispersion relation is obtained analytically using the normal mode analysis technique which is modified due to the contribution of FLR corrections. From general dispersion relation, the condition of instability is obtained and it is found that Jeans condition is modified due to quantum effect. The general dispersion relation is reduced for both transverse and longitudinal mode of propagations. The condition of gravitational instability is modified due to the presence of both FLR and quantum corrections in the transverse mode of propagation. In longitudinal case, it is found to be unaffected by the FLR effects but modified due to the quantum corrections. The growth rate of Jeans instability is discussed numerically for various values of quantum and FLR corrections of the medium. It is found that the quantum parameter and FLR effects have stabilizing influence on the growth rate of instability of the system. Sharma, Prerana [Physics Department, Ujjain Engineering College, Ujjain, Madhya Pradesh 465010 (India)]; Chhajlani, R. K. [School of Studies in Physics, Vikram University Ujjain, Madhya Pradesh 465010 (India)] 2013-09-15 427 The currently accepted model for quantum interference resulting from the emission of electron waves from two scattering centers induced by either light or charged particle impact is analogous to Young's emission of two light waves from two slits. In this work we show that this simple classical wave model is incomplete and that there is a more complicated quantum interference pattern for low energy ionization caused by electron impact. Ozer, Z. N.; Chaluvadi, H.; Ulu, M.; Dogan, M.; Aktas, B.; Madison, D.
2014-04-01 428 The problem of boundary conditions in a supersymmetric theory of quantum cosmology is studied, with application to the one-loop prefactor in the quantum amplitude. Our background cosmological model is flat Euclidean space bounded by a three-sphere, and our calculations are based on the generalized Riemann zeta-function. One possible set of supersymmetric local boundary conditions involves field strengths for spins 1, Peter D. D'Eath; Giampiero Esposito 1995-01-01 429 Although devices working on quantum principles can revolutionize the electronic industry, they have not been achieved yet as it is difficult to control their stability. We show that one can use evanescent modes to build stable quantum switches. The physical principles that make this possible is explained in detail. Demonstrations are given using a multichannel Aharonov-Bohm interferometer. We propose a new S matrix for multichannel junctions to solve the scattering problem. Mukherjee, Sreemoyee; Yadav, Ashutosh; Singha Deo, P. 2013-01-01 430 We discuss violations of CPT and quantum mechanics due to interactions of neutrinos with space-time quantum foam. Neutrinoless double beta decay and oscillations of neutrinos from astrophysical sources (supernovae, active galactic nuclei) are analysed. It is found that the propagation distance is the crucial quantity entering any bounds on EHNS parameters. Thus, while the bounds from neutrinoless double beta decay H. V. Klapdor-Kleingrothaus; H. Päs; U. Sarkar 2000-01-01 431 Shell phenomena in small quantum dots with a few electrons under a perpendicular magnetic field are discussed within a simple model. It is shown that various kinds of shell structures, which occur at specific values for the magnetic field lead to a disappearance of the orbital magnetization for particular magic numbers for noninteracting electrons in small quantum dots. Including the R. G.
Nazmitdinov 2009-01-01 432 PubMed We present an optimized hierarchical equations of motion theory for quantum dissipation in multiple Brownian oscillators bath environment, followed by a mechanistic study on a model donor-bridge-acceptor system. We show that the optimal hierarchy construction, via the memory-frequency decomposition for any specified Brownian oscillators bath, is generally achievable through a universal pre-screening search. The algorithm goes by identifying the candidates for the best be just some selected Padé spectrum decomposition based schemes, together with a priori accuracy control criterions on the sole approximation, the white-noise residue ansatz, involved in the hierarchical construction. Beside the universal screening search, we also analytically identify the best for the case of Drude dissipation and that for the Brownian oscillators environment without strongly underdamped bath vibrations. For the mechanistic study, we quantify the quantum nature of bath influence and further address the issue of localization versus delocalization. Proposed are a reduced system entropy measure and a state-resolved constructive versus destructive interference measure. Their performances on quantifying the correlated system-environment coherence are exemplified in conjunction with the optimized hierarchical equations of motion evaluation of the model system dynamics, at some representing bath parameters and temperatures. Analysis also reveals the localization to delocalization transition as temperature decreases. PMID:22713032 Ding, Jin-Jin; Xu, Rui-Xue; Yan, YiJing 2012-06-14 434 National Technical Information Service (NTIS) Plateau formation in the fractional quantum Hall effect is shown to arise because, by pinning of vortices in the incompressible electron liquid, the canonical filling factor can be stationarily maintained in the interconnected region between the vortices.... H. Bruus O. P. Hansen E. B. Hansen 1988-01-01 435 Quantum ion-acoustic solitary waves are studied by considering the effects of exchange and correlation for the electrons.
Starting from one-dimensional quantum hydrodynamic equations, including the term of exchange correlation for electrons, we obtain a model in which two dimensionless parameters appear (in addition to the parameter measuring the quantum diffraction) measuring the exchange and the correlation. A new deformed Korteweg-de Vries equation is derived. The effect of exchange and correlation is reflected in the phase speed as well as in the nonlinear and dispersion terms. Its solution shows that the exchange-correlation effects modify the amplitude as well as the width of the weak solitary waves. In the arbitrary amplitude regime, and as may be expected, a pseudopotential analysis shows that the exchange-correlation effects may change the nature (compressive or rarefactive) of the quantum ion-acoustic solitary waves. Our results complement and give new insight into the previously published work on this problem. Ourabah, Kamel; Tribeche, Mouloud 2013-10-01 436 We studied electro-optical effects in 2D quantum confined CdSe nanoplatelets synthesized by colloidal chemistry. They were incorporated into transparent polymeric film sandwiched between two ITO electrodes to which the electric potential has been applied. The electro-optical response in the nanoplatelets has a Stark-like character similar to observed elsewhere for CdSe quantum dots and nanorods. However, the magnitude of the Stark effect in the platelets is of the order of magnitude higher than that in quantum dots or nanorods of an equivalent diameter. The electro-optical response from the nanoplatelets is partially polarized. Artemyev, M. V.; Prudnikau, A. V.; Ermolenko, M. V.; Gurinovich, L. I.; Gaponenko, S. V. 
2013-05-01 437 Magnetic-sensitive radical-ion-pair reactions are understood to underlie the biochemical magnetic compass used by avian species for navigation. Radical-ion-pair reactions were recently shown to manifest a host of quantum-information-science effects, like quantum jumps and the quantum Zeno effect. We here show that the quantum Zeno effect immunizes the magnetic and angular sensitivity of the avian compass mechanism against the deleterious and I. K. Kominis 2009-01-01 438 Quantum-dot nanocrystals have particular optical properties due to the quantum confinement effect and the surface effect. This study focuses on the effect of surface conditions on the emission from quantum dots. The quantum dots prepared with 1-hexadecylamine (HDA) in the synthesis show strong emission while the quantum dots prepared without HDA show weak emission, as well as emission from surface energy traps. The comparison of the X-ray patterns of these two sets of quantum dots reveals that HDA forms a layer on the surface of quantum dot during the synthesis. This surface passivation with a layer of HDA reduces surface energy traps, therefore the emission from surface trap levels is suppressed in the quantum dots synthesized with HDA. Lee, Jae-Won; Yang, Ho-Soon; Hong, K. S.; Kim, S. M. 2013-12-01 439 We present results on two approaches for placing II-VI quantum dots in resonance with a pillar microcavity. The first approach consists in growing a fully epitaxial structure: a ZnTe 3λ/2 cavity containing CdTe quantum dots sandwiched between two CdMgTe/CdZnMgTe distributed Bragg reflectors. We observed a strong enhancement of the emission intensity for a dot well located into a 0.9 μm diameter pillars. More striking results were obtained using CdSe QDs in a λ/2 ZnSe cavity sandwiched between SiO2/TiO2 Bragg reflectors.
We probed the Purcell effect by time-resolved photoluminescence and intensity saturation measurements performed on single quantum dots located in a 1.1 μm diameter hybrid pillar. A four-fold enhancement of quantum dot spontaneous emission rate is observed for quantum dots in resonance with excited degenerate modes of the pillar. Robin, I. C.; André, R.; Balocchi, A.; Carayon, S.; Gérard, J. M.; Kheng, K.; Dang, Le Si; Mariette, H.; Moehl, Sebastien; Tinjod, Frank 2005-11-01 440
http://mathhelpforum.com/advanced-algebra/170389-subspaces.html
Thread: Subspaces

1. Subspaces

I need to determine whether U is a subspace of V. If it is not a subspace, I need to state a condition that fails and give a counterexample showing that the condition fails.

a) V is the space of all differentiable functions R->R, and U is the set of differentiable functions whose derivative at 0 takes the value 1.
b) V is the space of all polynomials with real coefficients, viewed as functions R->R, and U is the set of all differentiable functions R->R.
c) V = R^4, and U = {(a, ab, b, c) belonging to R^4 : a, b, c belong to R}

I know that to prove these are subspaces of V I need to prove that each is closed under scalar multiplication and addition, but I'm not sure how to write them out.

2. Originally Posted by skittle

I need to determine whether U is a subspace of V. If it is not a subspace, I need to state a condition that fails and give a counterexample showing that the condition fails.

a) V is the space of all differentiable functions R->R, and U is the set of differentiable functions whose derivative at 0 takes the value 1.
b) V is the space of all polynomials with real coefficients, viewed as functions R->R, and U is the set of all differentiable functions R->R.
c) V = R^4, and U = {(a, ab, b, c) belonging to R^4 : a, b, c belong to R}

I know that to prove these are subspaces of V I need to prove that each is closed under scalar multiplication and addition, but I'm not sure how to write them out.

What have you tried?

For a), let $f,g \in U$. What is $f'(0)+g'(0)$, and what does this tell you?

For b), just verify what you need for a subspace; a polynomial can be written as $\displaystyle p(x)=\sum_{k=0}^{n}a_kx^k$.

For c), try adding two vectors of that form and see if their result is of the correct form.
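Following up the hint for part a) with the worked step (standard subspace reasoning, consistent with the hint above): if $f, g \in U$, then

```latex
(f+g)'(0) = f'(0) + g'(0) = 1 + 1 = 2 \neq 1 ,
```

so $f+g \notin U$: U is not closed under addition, hence not a subspace. (Equivalently, the zero function is not in U, since its derivative at 0 is 0, not 1.)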
https://goby.software/3.0/md_doc100_acomms.html
Goby3  3.0.12 2022.09.26 goby-acomms: An overview # Overview ## Analogy to established networking systems To start on some (hopefully) common ground, let's begin with an analogy to Open Systems Initiative (OSI) networking layers in this table. For a complete description of the OSI layers see http://www.itu.int/rec/T-REC-X.200-199407-I/en. OSI Layer Goby-Acomms library component API class(es) Example(s) Application N/A gobyd Presentation DCCL dccl::Codec Session No sessions Transport queue goby::acomms::QueueManager queue_simple.cpp chat.cpp Network Does not yet exist. Data Link driver subclasses of goby::acomms::ModemDriverBase, e.g. goby::acomms::MMDriver driver_simple.cpp chat.cpp amac goby::acomms::MACManager amac_simple.cpp chat.cpp Physical Not part of Goby Modem Firmware, e.g. WHOI Micro-Modem Firmware (NMEA 0183 on RS-232) (see Interface Guide) ## Acoustic Communications are slow Do not take the OSI mapping too literally; some things we are doing here for acoustic communications (hereafter, acomms) are unconventional from the approach of networking on electromagnetic carriers (hereafter, EM networking). The difference is a vast spread in the expected throughput of a standard internet hardware carrier and acoustic communications. For example, an optical fiber can put through greater than 10 Tbps over greater than 100 km, whereas the WHOI acoustic Micro-Modem can (at best) do 5000 bps over several km. This is a difference of thirteen orders of magnitude for the bit-rate distance product! ## Efficiency to make messages small is good Extremely low throughput means that essentially every efficiency in bit packing messages to the smallest size possible is desirable. The traditional approach of layering (e.g. TCP/IP) creates inefficiencies as each layer wraps the message of the higher layer with its own header. See RFC3439 section 3 ("Layering Considered Harmful") for an interesting discussion of this issue http://tools.ietf.org/html/rfc3439#page-7. 
Thus, the "layers" of goby-acomms are more tightly interrelated than TCP/IP, for example. Higher layers depend on lower layers to carry out functions such as error checking and do not replicate this functionality.

## Total throughput unrealistic: prioritize data

The second major difference stemming from this bandwidth constraint is that total throughput is often an unrealistic goal. The quality of the acoustic channel varies widely from place to place, and even from hour to hour as changes in the sea affect propagation of sound. This means that it is also difficult to predict what one's throughput will be at any given time.

These two considerations manifest themselves in the goby-acomms design as a priority based queuing system for the transport layer. Messages are placed in different queues based on their priority (which is determined by the designer of the message). This means that the channel is always utilized (low priority data are sent when the channel quality is high) but important messages are not swamped by low priority data.

In contrast, TCP/IP considers all packets equally. Packets made from a spam email are given the same consideration as a high priority email from the President. This is a trade-off in efficiency versus simplicity that makes sense for EM networking, but does not for acoustic communications.

## Despite all this, simplicity is good

The "law of diminishing returns" means that at some point, if we try to optimize excessively, we will end up making the system more complex without substantial gain. Thus, goby-acomms makes some concessions for the sake of simplicity:

• Numerical message fields are bounded by powers of 10, rather than 2. Humans deal much better with decimal than binary.
• User data packetizing (and subsequent unpacketizing) is not done. This is a key component of TCP/IP, but with the number of dropped packets one can expect with acomms, at the moment this does not seem like a good idea.
The user is expected to provide data that is smaller than or equal to the packet size of the physical layer (e.g. 32 - 256 bytes for the WHOI Micro-Modem).

## Component model

A relatively simple component model for the goby-acomms library showing the interface classes:

# dccl: Encoding and decoding

The Dynamic Compact Control Language (DCCL) provides a structure for defining messages to be sent through an acoustic modem. The messages are configured in Google Protocol Buffers and are intended to be easily reconfigurable, unlike the original CCL framework used in the REMUS vehicles and others (for information on CCL, see http://acomms.whoi.edu/ccl/). Unlike the encoder / decoder provided with Google Protocol Buffers, each field (which could be a primitive type like double, int32, string or a user-defined embedded message like CTDMessage) of a DCCL message can be encoded using a DCCL built-in or user-defined encoder. This allows the codecs to be matched to the data's physical origins and thus make the most of the limited throughput available by making very small encoded messages. DCCL is now a standalone library that can be used with or without Goby. See http://libdccl.org for detailed documentation.

# queue: Priority based message queuing

The goby-acomms queuing (queue) component interacts with both the application level process and the modem driver process that talks directly to the modem. On the application side, queue provides the ability for the application level process to push DCCL messages to various queues and receive messages from a remote sender that correspond to messages in the same queue (e.g. you have a queue for STATUS_MESSAGE that you can push messages to and also receive other STATUS_MESSAGEs on). The push feature is called by the application level process and received messages are signaled to all previously bound slots (see signal_slot). On the driver side, queue provides the modem driver with data upon request.
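The "bounded by powers of 10" idea behind DCCL's tight packing can be made concrete. The sketch below is an illustration of the principle only, not the actual DCCL API: a numeric field bounded to [min, max] with a fixed number of decimal digits needs only enough bits to enumerate its distinct values, far fewer than a raw 64-bit double.

```cpp
#include <cmath>

// Illustrative only (see http://libdccl.org for DCCL's real codecs):
// bits needed to enumerate every value in [min, max] at
// 'precision' decimal digits.
int bounded_field_bits(double min, double max, int precision)
{
    // number of distinct representable values, e.g. [0, 100] at
    // precision 0 has 101 values
    const double distinct = (max - min) * std::pow(10.0, precision) + 1;
    return static_cast<int>(std::ceil(std::log2(distinct)));
}
```

For example, a field bounded to [0, 100] with no decimal places has 101 distinct values and fits in 7 bits, versus 64 bits for an unbounded double.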
It chooses the data to send based on dynamic priorities (and several other configuration parameters). It will also pack as many messages from the user into a single frame from the modem as possible using the DCCLCodec's repeated encoding functionality. Note, however, that queue will not split a user's data into frames (like TCP/IP). If this functionality is desired, it must be provided at the application layer. Acoustic communications are too unpredictable to reliably stitch together frames.

Detailed documentation for queue

# modemdriver: Modem driver

The goby-acomms Modem driver component (modemdriver) of the Goby-Acomms library provides an interface from the rest of goby-acomms to the acoustic modem firmware. While currently the only driver publicly available is for the WHOI Micro-Modem (and for an example toy modem "ABCDriver"), this component is written in such a way that drivers for any acoustic modem that interfaces over a serial or TCP connection and can provide (or provide abstractions for) sending data directly to another modem on the link should be able to be written. Contributions of a modem driver for another acoustic modem are highly welcome.

Detailed documentation for modemdriver

# amac: Medium Access Control (MAC)

The goby-acomms MAC component (amac) handles access to the shared medium, in our case the acoustic channel. We assume that we have a single (frequency) band for transmission so that if vehicles transmit simultaneously, collisions will occur between messaging. Therefore, we use time division multiple access (TDMA) schemes, or "slotting". Networks with multiple frequency bands will have to employ a different MAC scheme or augment amac for the frequency division multiple access (FDMA) scenario.

The Goby AMAC provides two basic types of TDMA:

• Decentralized: Each node initiates its own transaction at the appropriate time in the TDMA cycle.
This requires reasonably well synchronized clocks (any skew must be included in the time of the slot as a guard, so skews of less than 0.1 seconds are generally acceptable).
• Centralized (also called "polling"): For legacy support, "polling" is also provided. This is a TDMA enforced by a central computer (the "poller"). The "poller" sends a request for data from a list of nodes in sequential order. The advantage of polling is that synchronous clocks are not needed and the MAC scheme can be changed on short notice by the topside operator. Clearly this only works with modem hardware capable of third-party mediation of transmission (such as the WHOI Micro-Modem).

Detailed documentation for amac

# Software concepts used in goby-acomms

## Signal / Slot model for asynchronous events

The layers of goby-acomms use a signal / slot system for asynchronous events such as receipt of an acoustic message. Each signal can be connected (goby::acomms::connect()) to one or more slots, which are functions or member functions matching the signature of the signal. When the signal is emitted, the slots are called in the order they were connected. To ensure synchronous behavior and thread-safety throughout goby-acomms, signals are only emitted during a call to a given component's API class do_work method (i.e. goby::acomms::ModemDriverBase::do_work(), goby::acomms::QueueManager::do_work(), goby::acomms::MACManager::do_work()). For example, if I want to receive data from queue, I need to connect to the signal QueueManager::signal_receive.
Thus, I need to define a function or class method with the same signature:

At startup, I then connect the signal to the slot:

If instead, I was using a member function such as

```cpp
class MyApplication
{
  public:
};
```

I would call connect (probably in the constructor for MyApplication) passing the pointer to the class:

```cpp
MyApplication::MyApplication()
{
}
```

The Boost.Signals2 library is used without modification, so for details see https://www.boost.org/doc/libs/1_58_0/doc/html/signals2.html.

Google Protocol Buffers are used as a convenient way of generating data structures (basic classes with accessors, mutators). They can also be serialized efficiently, though this is not generally used within goby-acomms. Protocol buffers messages are defined in .proto files that have a C-like syntax:

```proto
message MyMessage
{
    optional uint32 a = 1;
    required string b = 2;
    repeated double c = 3;
}
```

The identifier "optional" means a proper MyMessage object may or may not contain that field. "required" means that a proper MyMessage always contains such a field. "repeated" means a MyMessage can contain a vector of this field (0 to n entries). The sequence number "= 1" must be unique for each field and determines the serialized format on the wire. For our purposes it is otherwise insignificant. See https://developers.google.com/protocol-buffers/docs/proto for full details.
The .proto file is pre-compiled into a C++ class that is loosely speaking (see https://developers.google.com/protocol-buffers/docs/reference/cpp-generated for precise details):

```cpp
class MyMessage
{
  public:
    MyMessage();

    // set
    void set_a(unsigned a);
    void set_b(const std::string& b);

    // get
    unsigned a();
    std::string b();
    double c(int index);
    const RepeatedField<double>& c(); // RepeatedField ~= std::vector

    // has
    bool has_a();
    bool has_b();
    int c_size();

    // clear
    void clear_a();
    void clear_b();
    void clear_c();

  private:
    unsigned a_;
    std::string b_;
    RepeatedField<double> c_; // RepeatedField ~= std::vector
};
```

Clearly the .proto representation is more compact and amenable to easy modification. All the Protocol Buffers messages used in goby-acomms are placed in the goby::acomms::protobuf namespace for easy identification. This doxygen documentation does not understand Protocol Buffers language so you will need to look at the source code directly for the .proto (e.g. acomms_modem_message.proto).

# UML models

Model that gives the sequence for sending a message with goby-acomms (using the :

Model that shows the commands needed to start and keep goby-acomms running:
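As a closing illustration of the signal / slot pattern described above: Goby itself uses Boost.Signals2, but the flow (slots connect to a signal and are invoked in connection order when the signal is emitted, which in Goby happens inside do_work()) can be shown with a minimal stand-in. The Signal class below is a toy for illustration, not Goby's or Boost's API.

```cpp
#include <functional>
#include <string>
#include <vector>

// Toy signal: slots connect and are invoked in connection order
// when the signal is emitted. In Goby, emission happens during the
// component's do_work() call, which keeps behavior synchronous.
class Signal
{
  public:
    using Slot = std::function<void(const std::string&)>;

    void connect(Slot slot) { slots_.push_back(std::move(slot)); }

    void emit(const std::string& msg) const
    {
        for (const auto& slot : slots_) slot(msg);
    }

  private:
    std::vector<Slot> slots_;
};
```

A receiver would connect its handler once at startup and then be called for every received message; multiple independent consumers can connect to the same signal without the emitter knowing about them, which is the point of the pattern.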
http://mathhelpforum.com/algebra/185323-help-simplifying-expression-exponents-i-think-may-incorrect.html
# Math Help - help simplifying an expression with exponents (I think the book may be incorrect)

1. ## help simplifying an expression with exponents (I think the book may be incorrect)

Below are pictures of the problem in the book and how I simplified it.

$(2a^2 b)(-3a^{-2} b^3)^2$

The book's answer is $-6b^7$. I can't figure out how to get this answer. Is the book wrong? My answer is $\dfrac{18b^7}{a^2}$. Is my answer correct?

Attached Thumbnails

2. ## Re: help simplifying an expression with exponents (I think the book may be incorrect)

Originally Posted by StudentMCCS
The book's answer is -6b^7. I can't figure out how to get this answer. Is the book wrong? Is my answer correct?

1. If (and only if) you have copied the question correctly, your answer is OK. But ...
2. The answer in the book belongs to the question $(2a^{2} b)(-3a^{-2}(b^3)^2)$
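To make the two readings explicit, here is a step-by-step check (my own working, not from the original thread). The expression as posted simplifies as

$(2a^2 b)(-3a^{-2} b^3)^2 = (2a^2 b)\cdot 9a^{-4}b^6 = 18\,a^{-2}b^7 = \dfrac{18b^7}{a^2},$

while the variant that matches the book's answer simplifies as

$(2a^2 b)\left(-3a^{-2}(b^3)^2\right) = (2a^2 b)\cdot(-3a^{-2}b^6) = -6b^7.$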
https://publi2-as.oma.be/record/5875?ln=en
2022
Ref: SCART-2022-0107

Tidal Constraints on the Martian Interior

Published in Journal of Geophysical Research: Planets, 127, issue 11 (2022)

Abstract: We compare several recent Martian interior models and evaluate how these are impacted by the tidal constraints provided by the Love number $k_2$ and the secular acceleration in longitude $s$ of its main moon, Phobos. The expression of the latter is developed up to harmonic degree 5 to match the accuracy of the current observations. We match a number of current interior structure models to the recent measurements of the tidal parameters and derive estimations of the possible core radius, temperature profile, and attenuation in the Martian interior. Our estimation of the core radius is 1,820 $\pm$ 80 km, consistent with recent seismic measurements. The attenuation profiles in the Martian interior at the main tidal period of Phobos are similar between the considered models, giving a range for the degree-2 bulk tidal attenuation $Q_2$ = 93.0 $\pm$ 8.40, but diverge at seismic frequencies. At seismic frequencies, model shear attenuation $Q_\mu$ ranges between 100 and 4,000 in the lower mantle, so a measurement of seismic shear attenuation could be used as an effective means of distinguishing between the models considered. Other constraints, such as elastic lithosphere thickness and the Chandler wobble period, favor a thicker elastic lithosphere and models with a frequency dependence $\alpha$ of the shear attenuation between 0.15 and 0.4. Improved constraints on the Martian interior should be possible with additional seismic and radio observations from the InSight mission.

DOI: 10.1029/2022JE007291

Funding: 3PRODPLANINT

The record appears in these collections: Royal Observatory of Belgium > Reference Systems & Planetology; Science Articles > Peer Reviewed Articles
https://www.albert.io/ie/abstract-algebra/reducible-groups
# Reducible Groups

ABSALG-NVXWCK

We call a group $G$ reducible if $G$ is isomorphic to a product of two non-trivial groups. Which of the following groups are reducible?

Select ALL that apply.

A $\mathbb{Z}_8$
B $S_3$
C $\mathbb{Z}_2\times \mathbb{Z}_4$
D $\mathbb{Z}_{12}$
E $D_4$
https://brilliant.org/problems/most-outstanding-prime/
# Most outstanding prime

I have 4 distinct prime numbers. The sum of these 4 numbers is also a prime number. The product of these 4 numbers is a/an $\text{\_\_\_\_\_\_\_\_\_\_\_}$ number.
http://www.newworldencyclopedia.org/entry/Distance
# Distance

Distance is a numerical description of the separation between objects or points at a given moment in time. In physics or everyday discussion, distance may refer to a physical length or period of time. Occasionally, it is expressed in approximate units, such as "two counties over." In mathematics, however, distance must meet rigorous criteria. In most cases, the expression "distance from A to B" is interchangeable with "distance between A and B."

Distances can be measured by various techniques. Accurate distance measurements are important for various fields of work, such as surveying, aircraft navigation, and astronomy.

## Distance, length, and displacement

Distance along a path compared with displacement.

It is important to clarify how the terms length and displacement are related to distance, and how they differ. The term length usually refers to the longest (or longer) dimension of an object (or area or system), measured as the distance between two ends or sides of the object. Thus, length is generally restricted to a given object's spatial dimensions, whereas distance often refers to the extent of separation between objects (or systems).

If a person, animal, vehicle, or some object travels from point A to point B, the shortest distance between A and B is known as the displacement, but the distance covered may be much greater than the displacement. If points A and B coincide, the displacement is zero, but the distance covered is not. Moreover, displacement is a vector quantity, containing both magnitude and direction. By contrast, distance is a scalar quantity, expressing only magnitude. Thus, distance cannot be a negative number.

## Units of distance

In the physical sciences and engineering, units of distance are the same as units of length. These units may be based on lengths of human body parts, the distance traveled in a certain number of paces, the distance between landmarks or places on the Earth, or the length of some arbitrarily chosen object.
In the International System of Units (SI), the basic unit of length is the meter, which is now defined in terms of the speed of light. The centimeter and the kilometer, derived from the meter, are also commonly used units. In U.S. customary (English or Imperial) units, the units of length in common usage are the inch, the foot, the yard, and the mile.

Units used to denote distances in the vastness of space, as in astronomy, are much longer than those typically used on Earth. They include the astronomical unit, the light-year, and the parsec. To define microscopically small distances, as in chemistry and microbiology, units used include the micron (or micrometer) and the ångström.

## Measurement of distance

Various techniques have been developed for the measurement of length or distance. For fairly short lengths and distances, a person may use a ruler or measuring tape. For longer distances traveled by a vehicle, the odometer is useful. Some methods rely on a mathematical approach known as triangulation, which is based on geometric relationships.

Various highly sensitive and precise techniques involve the use of lasers.[1] Some laser distance meters measure the "time of flight" of a laser pulse, that is, the time it takes for a laser pulse to travel round-trip between a laser emitter and a target. Advanced laser techniques have been used to find the distance of the Moon from the Earth to an accuracy of a few centimeters.

Accurate distance measurements are important for people working in various fields, such as surveying, aircraft navigation, and astronomy. These areas are discussed briefly below.

### Surveying

Surveyor at work with a leveling instrument.

Surveying is the technique and science of accurately determining the terrestrial or three-dimensional space position of points and the distances and angles between them. These points are usually, but not exclusively, associated with positions on the surface of the Earth.
An alternative definition, given by the American Congress on Surveying and Mapping (ACSM), states that surveying is the science and art of making all essential measurements to determine the relative position of points and/or physical and cultural details above, on, or beneath the surface of the Earth, and to depict them in a usable form, or to establish the position of points and/or details.

Surveying has been an essential element in the development of the human environment since the beginning of recorded history (about 5,000 years ago), and it is a requirement in the planning and execution of nearly every form of construction. Its most familiar modern uses are in the fields of transport, building and construction, communications, mapping, and in defining legal boundaries for land ownership. To accomplish their objective, surveyors use elements of geometry, engineering, trigonometry, mathematics, physics, and law.

### Aircraft navigation

Distance measuring equipment (DME) at an airport.

Distance Measuring Equipment (DME) is a transponder-based radio navigation technology that measures distance by timing the propagation delay of VHF or UHF radio signals. Aircraft pilots use DME to determine their distance from a land-based transponder by sending and receiving pulse pairs (two pulses of fixed duration and separation). The DME system is composed of a UHF transmitter/receiver (interrogator) in the aircraft and a UHF receiver/transmitter (transponder) on the ground.

The aircraft interrogates the ground transponder with a series of pulse pairs (interrogations), and the ground station replies with an identical sequence of reply pulse pairs with a precise time delay (typically 50 microseconds). The DME receiver in the aircraft searches for pulse pairs with the correct time interval between them. The aircraft interrogator locks on to the DME ground station once it recognizes that the particular pulse sequence is the interrogation sequence it sent out originally.
A radio pulse takes around 12.36 microseconds to travel one nautical mile and back; this round trip is also referred to as a radar mile. The time difference between interrogation and reply, minus the 50-microsecond ground transponder delay, is measured by the interrogator's timing circuitry and translated into a distance measurement in nautical miles, which is then displayed in the cockpit.

### Astronomy

The cosmic distance ladder (also known as the Extragalactic Distance Scale) is the succession of methods by which astronomers determine distances to celestial objects. A direct distance measurement to an astronomical object is possible only for objects that are "close enough" (within about a thousand parsecs) to Earth. The techniques for determining distances to more distant objects are all based on various measured correlations between methods that work at close distances and methods that work at larger distances.

The ladder analogy arises because no single technique can measure distances at all ranges encountered in astronomy. Instead, one method can be used to measure nearby distances, a second can be used to measure nearby-to-intermediate distances, and so on. Each rung of the ladder provides information that can be used to determine distances at the next higher rung.

At the base of the ladder are fundamental distance measurements, in which distances are determined directly, with no physical assumptions about the nature of the object in question.[2] These direct methods are:

• parallax (or triangulation), based upon trigonometry, using precise measurements of angles, similar to what is used in surveying.
• light travel time (that is, the constancy of the speed of light), as in radar. Radar can (for practical reasons) only be used within the Solar System.
Beyond the use of parallax, the overlapping chain of distance measurement techniques includes the use of Cepheid variables, planetary nebulae, the most luminous supergiants, the most luminous globular clusters, the most luminous HII regions, supernovae, and the Hubble constant and redshifts.[3]

## Mathematics

### Geometry

In neutral geometry, the minimum distance between two points is the length of the line segment between them. In analytic geometry, one can find the distance between two points of the xy-plane using the distance formula. The distance between (x1, y1) and (x2, y2) is given by

$d=\sqrt{(\Delta x)^2+(\Delta y)^2}=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}.$

Similarly, given points (x1, y1, z1) and (x2, y2, z2) in three-space, the distance between them is

$d=\sqrt{(\Delta x)^2+(\Delta y)^2+(\Delta z)^2}=\sqrt{(x_1-x_2)^2+(y_1-y_2)^2+(z_1-z_2)^2}.$

This is easily proven by constructing a right triangle with a leg on the hypotenuse of another (with the other leg orthogonal to the plane that contains the first triangle) and applying the Pythagorean theorem. In the study of complicated geometries, we call this (most common) type of distance Euclidean distance, as it is derived from the Pythagorean theorem, which does not hold in non-Euclidean geometries. This distance formula can also be expanded into the arc-length formula.

In pseudocode, the common distance formula is written like this:

square_root( power(x2-x1, 2) + power(y2-y1, 2) );

### Distance in Euclidean space

In the Euclidean space Rn, the distance between two points is usually given by the Euclidean distance (2-norm distance). Other distances, based on other norms, are sometimes used instead.
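As an illustration (my own sketch, not part of the original article), the distance formulas above translate directly into a few lines of Python:

```python
import math

def distance_2d(p, q):
    """Euclidean distance between points (x1, y1) and (x2, y2) in the plane."""
    (x1, y1), (x2, y2) = p, q
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)

def distance_3d(p, q):
    """Euclidean distance between two points in three-space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# A 3-4-5 right triangle: the distance from (0, 0) to (3, 4) is 5.
print(distance_2d((0, 0), (3, 4)))        # 5.0
print(distance_3d((0, 0, 0), (1, 2, 2)))  # 3.0
```

The sum-over-coordinates form in `distance_3d` extends unchanged to any number of dimensions.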
For a point (x1, x2, ..., xn) and a point (y1, y2, ..., yn), the Minkowski distance of order p (p-norm distance) is defined as:

1-norm distance $= \sum_{i=1}^n \left| x_i - y_i \right|$

2-norm distance $= \left( \sum_{i=1}^n \left| x_i - y_i \right|^2 \right)^{1/2}$

p-norm distance $= \left( \sum_{i=1}^n \left| x_i - y_i \right|^p \right)^{1/p}$

infinity-norm distance $= \lim_{p \to \infty} \left( \sum_{i=1}^n \left| x_i - y_i \right|^p \right)^{1/p} = \max \left(|x_1 - y_1|, |x_2 - y_2|, \ldots, |x_n - y_n| \right).$

p need not be an integer, but it cannot be less than 1, because otherwise the triangle inequality does not hold.

The 2-norm distance is the Euclidean distance, a generalization of the Pythagorean theorem to more than two coordinates. It is what would be obtained if the distance between two points were measured with a ruler: the "intuitive" idea of distance.

The 1-norm distance is more colorfully called the taxicab norm or Manhattan distance, because it is the distance a car would drive in a city laid out in square blocks (if there are no one-way streets).

The infinity-norm distance is also called the Chebyshev distance. In 2D, it is the minimum number of moves a king needs to travel between two squares on a chessboard.

The p-norm is rarely used for values of p other than 1, 2, and infinity, but see superellipse.

In physical space, the Euclidean distance is in a way the most natural one, because in this case the length of a rigid body does not change with rotation.

### General case

In mathematics, in particular geometry, a distance function on a given set M is a function d: M×M → R, where R denotes the set of real numbers, that satisfies the following conditions:

• d(x,y) ≥ 0, and d(x,y) = 0 if and only if x = y. (Distance is positive between two different points, and is zero precisely from a point to itself.)
• It is symmetric: d(x,y) = d(y,x). (The distance between x and y is the same in either direction.)
• It satisfies the triangle inequality: d(x,z) ≤ d(x,y) + d(y,z). (The distance between two points is the shortest distance along any path.)

Such a distance function is known as a metric. Together with the set, it makes up a metric space.

For example, the usual definition of the distance between two real numbers x and y is d(x,y) = |x − y|. This definition satisfies the three conditions above, and corresponds to the standard topology of the real line. But distance on a given set is a definitional choice. Another possible choice is to define d(x,y) = 0 if x = y, and 1 otherwise. This also defines a metric, but it gives a completely different topology, the "discrete topology"; with this definition, numbers cannot be arbitrarily close.

### Distances between sets and between a point and a set

Various distance definitions are possible between objects. For example, between celestial bodies one should not confuse the surface-to-surface distance and the center-to-center distance. If the former is much less than the latter, as for a satellite in low Earth orbit (LEO), the first tends to be quoted (the altitude); otherwise, e.g. for the Earth-Moon distance, the latter.

There are two common definitions for the distance between two non-empty subsets of a given set:

• One version of the distance between two non-empty sets is the infimum of the distances between any two of their respective points, which is the everyday meaning of the word. This is a symmetric prametric. On a collection of sets of which some touch or overlap each other, it is not "separating," because the distance between two different but touching or overlapping sets is zero. It is also not a hemimetric; i.e., the triangle inequality does not hold, except in special cases. Therefore, only in special cases does this distance make a collection of sets a metric space.
• The Hausdorff distance is the larger of two values: one is the supremum, for a point ranging over one set, of the infimum, for a second point ranging over the other set, of the distance between the points; the other value is likewise defined but with the roles of the two sets swapped. This distance makes the set of non-empty compact subsets of a metric space itself a metric space.

The distance between a point and a set is the infimum of the distances between the point and those in the set. This corresponds to the distance, according to the first-mentioned definition above of the distance between sets, from the set containing only this point to the other set. In terms of this, the definition of the Hausdorff distance can be simplified: it is the larger of two values, one being the supremum, for a point ranging over one set, of the distance between the point and the other set, and the other value being likewise defined but with the roles of the two sets swapped.

## Other "distances"

• Mahalanobis distance is used in statistics.
• Hamming distance is used in coding theory.
• Levenshtein distance
• Chebyshev distance

## Notes

1. Distance Measurements with Lasers. Retrieved June 3, 2008.
2. The precise measurement of stellar positions is part of the discipline of astrometry.
3. Distance Measurement in Astronomy. Retrieved June 4, 2008.

## References

• Abdi, Hervé. 2007. "Distance." In Encyclopedia of Measurement and Statistics, Neil Salkind, ed. Thousand Oaks, CA: Sage. Retrieved October 12, 2007.
• Deza, Elena, and Michel Marie Deza. 2006. Dictionary of Distances. Amsterdam, The Netherlands: Elsevier. ISBN 9780444520876
• Figliola, R. S., and Donald E. Beasley. 2006. Theory and Design for Mechanical Measurements, 4th ed. Hoboken, NJ: John Wiley. ISBN 978-0471445937
• Webb, Stephen. 1999. Measuring the Universe: The Cosmological Distance Ladder. Springer-Praxis Series in Astronomy and Astrophysics. London: Springer. ISBN 1852331062
http://coertvonk.com/category/sw/hp41
Scientific calculator software for the HP-41:

• Number base conversion. Source: PPC ROM, December 1981.
• Curve fit: fits a curve to a set of data points (linear, exponential, logarithmic, and power).
• A program to approximate the first derivative of a function at a point. The step size can be provided by the user or determined automatically.
• Numerical integration; does not require a step size. Source: PPC ROM, December 1981.
• A program that transforms a matrix into row-reduced echelon form. This means the program can calculate determinants and inverses and solve systems of equations.
• Discrete Fourier transform. Written in FOCAL in 1987.
• Hyperbolic operations. Written in FOCAL in 1987.
• Complex eigenvalues, commonly used in control, electrical, and mechanical engineering.
• A program that transforms a complex matrix into row-reduced echelon form; it also computes the inverse and the determinant of its left square part.
• Polynomial factorization: a polynomial of degree n always has n roots. Handles polynomials up to 5th degree.
• Complex arithmetic with an adjustable branch cut in the complex plane.
• Complex number operations; easy to use, runs in extended memory.
• Complex arithmetic formulas, written in LaTeX, used in the HP-41 program. Includes everything from powers to trigonometric functions.
https://arxiv-export-lb.library.cornell.edu/abs/2206.05906v1
math.AC

# Title: A note on homogeneous rank 2 locally nilpotent derivations on $k[X,Y,Z]$

Abstract: In this article we show that for every prime number $p$, irreducible, homogeneous locally nilpotent derivations of rank 2 and degree $p-2$ are triangularizable. We also give the structure of the kernel of irreducible, non-triangularizable, homogeneous locally nilpotent derivations of rank 2 and degree $pq-2$, where $p,q$ are prime numbers. Further, we give a different proof of the freeness property of certain homogeneous locally nilpotent derivations of rank 2.

Comments: arXiv admin note: substantial text overlap with arXiv:2202.12630
Subjects: Commutative Algebra (math.AC)
MSC classes: 13N15, 13F20
Cite as: arXiv:2206.05906 [math.AC] (or arXiv:2206.05906v1 [math.AC] for this version)

## Submission history

From: Parnashree Ghosh [view email]
[v1] Mon, 13 Jun 2022 05:16:30 GMT (14kb)
[v2] Tue, 18 Oct 2022 14:14:25 GMT (17kb)
[v3] Tue, 29 Nov 2022 07:37:07 GMT (18kb)
https://www.gamedev.net/forums/topic/641901-code-review-dice-game-no-graphics/
# Code Review - Dice Game (No graphics) This topic is 1736 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic. ## Recommended Posts This is my first code review post. This is a text-based java game that was very simple to make. I am trying to keep my focus on learning the basics of programming, rather than messing with graphics. Please let me know what I can do better. Here is the PasteBin link: http://pastebin.com/6ABunV4H ##### Share on other sites Indifferent, Thank you so much for your review. Just by looking at my code, I knew that there were a ton of things that could have been done better. Most of the things you pointed out were the issues that I knew needed to be fixed, but wasn't sure how. I highly appreciate you taking the time to review it, and more importantly, that you were nice about it and offered me some solutions. The biggest thing that I would like to learn is coding etiquette and formatting. Do you happen to know of any websites or articles that offer any insight into this? ##### Share on other sites Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away. ;) Thank you for this ##### Share on other sites Indifferent, Thank you so much for your review. Just by looking at my code, I knew that there were a ton of things that could have been done better. Most of the things you pointed out were the issues that I knew needed to be fixed, but wasn't sure how. I highly appreciate you taking the time to review it, and more importantly, that you were nice about it and offered me some solutions. The biggest thing that I would like to learn is coding etiquette and formatting. Do you happen to know of any websites or articles that offer any insight into this? There are quite a few books that you'll see recommended time and time again. Code Complete seems to be a seminal book that might interest you, although I haven't read it myself so I can't vouch for it. 
I personally just tend to read articles that I stumble across, search for recommendations when I'm unsure whether I'm going about something in the right way (almost every question has been asked and answered a dozen times already), or browse forums like this one.

##### Share on other sites

I agree with most of the things Indifferent says, but I want to add a few small things.

All methods are static. Basically this means there is no object-oriented programming, which makes it harder to break things apart. I think it would be a good idea to try to find a book that discusses Java and OO.

Logic and UI are not separated. This is in line with Indifferent's "class length" point, but from a different angle. When you separate the logic from the actual UI output, it will be easier later on to add a graphical layer; besides that, it will be easier to test your logic using unit tests.

In the rules() method you have multiple calls to System.out.println(), and although it is probably a matter of taste, I would combine the complete string and print it all in one println().

Btw., for method lengths a good rule of thumb is methods between 3 and 7 lines; especially when the logic is complex, you want fewer lines.

##### Share on other sites

Thanks, both of you, for the advice on using classes. I'm currently reading through a book titled "Java Programming: From the Ground Up", and have just gotten to the basics of OOP. In fact, I think the chapter I'm about to start is about making classes and implementing them correctly.

> Btw., for method lengths a good rule of thumb is methods between 3 and 7 lines; especially when the logic is complex, you want fewer lines.

I'm always afraid that I'm going to have too many methods in my program, so would I clear this up by making custom classes, and then just creating an object that can reference them?

> Code Complete seems to be a seminal book that might interest you, although I haven't read it myself so I can't vouch for it.

Thanks for this!
I'll look it up on Amazon and give it a whirl.

##### Share on other sites

> I'm always afraid that I'm going to have too many methods in my program, so would I clear this up by making custom classes, and then just creating an object that can reference them?

I don't think you can have too many methods in a program. As was said before, your class should have one responsibility, but your methods should also have just one responsibility.
Besides that, not all methods have to be public. That being said, sometimes there are reasons to make a method larger than usual. This could be for readability or performance.

I can recommend the articles, videos and book of Robert C. Martin about "Clean Code". It is a relatively old book, but still very useful to read.

##### Share on other sites

This might help... you don't always have to think of methods as performing some vital function... sometimes you can have a private method whose sole purpose is to simplify the code in a more complicated public method. I often write little private methods that do various bits of non-trivial math operations, just to simplify the code.

##### Share on other sites

Agreed, sometimes I make a method just for the condition of an if statement, especially when it is more complex or used multiple times.

##### Share on other sites

I'm starting to realize this. I'm big about "clearing" the screen whenever changing player turns. I realized that it was a lot simpler to write a small method named clearScreen(), rather than type out:

for (int i = 0; i < 10; i++) { System.out.println(); }

every single time that I wanted to clear the screen.

##### Share on other sites

I know that using global variables is usually frowned upon, but is it okay to use them if I believe that I can do so efficiently?
There are no hard and fast rules in programming, even when it comes to some of the more controversial constructs. Although a global or two likely won't hurt, you should get into the habit of doing what you can to avoid needing them.
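To make the extract-method advice in this thread concrete, here is a minimal sketch in Java. The class, the rule in isWinningRoll(), and all names are hypothetical illustrations, not taken from the dice game actually posted on PasteBin:

```java
// Hypothetical sketch of the refactorings discussed in this thread.
// None of these names or rules come from the posted dice game.
public class DiceGameSketch {

    private static final int BLANK_LINES = 10;

    // Small helper replacing the repeated println loop from the thread.
    static void clearScreen() {
        for (int i = 0; i < BLANK_LINES; i++) {
            System.out.println();
        }
    }

    // A method extracted purely for an if-condition, as suggested above.
    // In the real game this would typically be private; it is left
    // package-visible here only so it is easy to exercise directly.
    static boolean isWinningRoll(int die1, int die2) {
        int sum = die1 + die2;
        return sum == 7 || sum == 11;
    }

    public static void main(String[] args) {
        clearScreen();
        if (isWinningRoll(3, 4)) {
            System.out.println("Winner!");
        }
    }
}
```

The condition in the if statement now reads as a sentence, which is the point the posters above were making: the method name documents the intent, so the caller needs no comment.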
http://www.act.elektro.dtu.dk/Research/Research_fields/Structure-borne-Sound
Structure-borne Sound

General description

Vibration of mechanical systems and waves in solid structures in the audible frequency range are subjects which form an integral part of engineering acoustics. The study of the phenomena of such vibrations and waves is called structure-borne sound, structural acoustics or vibro-acoustics; the three terms can be considered equivalent and interchangeable. Thus, vibro-acoustics is the study of mechanical waves in structures and how they interact with and radiate into adjacent media.

Although sound waves in structures cannot be heard directly, and can only be felt at low frequencies, they play an important role in noise control. Many sound signals are generated or transmitted in structures before they are radiated into the surrounding medium. Examples are musical sound from a string instrument, noise from machines such as pumps in a central heating system or transport vehicles, (unwanted) sound radiation from the cabinet of loudspeakers, or sound transmission and structure-borne noise in buildings.

A fundamental knowledge of structural sound waves and their propagation is necessary for understanding vibro-acoustics. In many ways sound waves in structures and in fluids (gases or liquids) are similar. There are, however, also fundamental differences, which are due to the fact that solids have shear stiffness, whereas gases or liquids show practically none (except for viscosity effects). As a consequence, acoustic energy can be transported not only by compressional (longitudinal) waves but also by shear waves and many combinations of compressional and shear waves. For noise control purposes, bending (or flexural, or transverse) waves are of primary importance. Bending waves are more complicated than compressional or shear waves and depend not only on material properties but also on geometric properties. Because of this, they are dispersive, which means that the different frequency components of a wave travel at different speeds.
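The dispersion mentioned above can be illustrated with the standard Kirchhoff thin-plate result, under which the bending-wave phase speed is c_B = (omega^2 * B / m'')^(1/4), with B the bending stiffness and m'' the mass per unit area. This formula and the steel properties below are generic textbook values, not data from the page itself:

```java
// Illustrative only: bending-wave phase speed from standard thin-plate
// theory. Material values are generic textbook numbers for steel, not
// data from this research page.
public class BendingWave {

    // Bending stiffness B = E h^3 / (12 (1 - nu^2)) of a thin plate
    static double bendingStiffness(double youngsModulus, double thickness, double poisson) {
        return youngsModulus * Math.pow(thickness, 3) / (12.0 * (1.0 - poisson * poisson));
    }

    // Phase speed c_B = (omega^2 * B / massPerArea)^(1/4) at frequency f (Hz)
    static double phaseSpeed(double f, double stiffness, double massPerArea) {
        double omega = 2.0 * Math.PI * f;
        return Math.pow(omega * omega * stiffness / massPerArea, 0.25);
    }

    public static void main(String[] args) {
        double h = 0.01;                       // 10 mm steel plate
        double b = bendingStiffness(2.1e11, h, 0.3);
        double massPerArea = 7850.0 * h;       // density * thickness
        // Dispersion: the higher-frequency component travels faster
        System.out.printf("100 Hz: %.1f m/s%n", phaseSpeed(100, b, massPerArea));
        System.out.printf("1 kHz:  %.1f m/s%n", phaseSpeed(1000, b, massPerArea));
    }
}
```

Because the speed grows with the square root of frequency, a broadband pulse in a plate smears out as it propagates, which is exactly why bending waves need dispersive (not simple wave-equation) models.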
When a vibrating structure is in contact with a fluid, the normal particle velocities at the interface must be equal in the two media. This causes some of the energy from the structure to escape into the fluid; some of it radiates away as sound in the far field, and some of it stays near the structure as an evanescent near field. Most sound radiation is caused by bending waves, which have most of their motion in the transverse direction.

The finite element method (FEM) can be used to predict the vibration of complex structures. A finite element computer program will assemble the mass, stiffness, and damping matrices based on geometrical and material properties. The vibration response is then solved based on the excitations applied. The finite element method is deterministic and mainly applicable in the low frequency range (small Helmholtz numbers). Therefore, an exact analysis of large vibro-acoustic systems and complicated structures can be very difficult and time-consuming. Furthermore, when solutions are sought in the full audible frequency range, it will nearly always be necessary to use approximate computational methods. The excitation is often broadband, which means that many natural modes will be excited simultaneously, and often these modes overlap. In addition, the modelling itself is complicated by the fact that boundary conditions and the exact material properties are rarely sufficiently well known in practice. In order to remedy this problem, a strongly simplified method for predicting mean-value responses and sound radiation in connection with complex vibro-acoustic problems has been developed. This method is called statistical energy analysis (SEA), and it has its origin in statistical room acoustics and in statistical mechanics.

Research Focus

At ACT and at CAMM, vibro-acoustic research is conducted in the fields mentioned above.
PhD projects related to vibro-acoustics are concerned with FE modelling of vocal folds and hearing aids, seismic inversion techniques, miniature loudspeaker modelling and FE modelling of orthotropic plates. Other research activities have been conducted on cross-coupling and source description of vibro-acoustic sources, experimental and theoretical studies of rib-stiffened plates, and radiation and sound transmission of finite plates, to name a few recent studies.

Contact

Jonas Brunskog, Associate Professor, DTU Electrical Engineering, +45 45 25 39 35

7 APRIL 2020
https://repository.tudelft.nl/islandora/search/author%3A%22Mulder%2C%20W.A.%22?page=4&collection=research
Searched for: author%3A%22Mulder%2C+W.A.%22 (81 - 100 of 101)

document Zhebel, E. (author), Minisini, S. (author), Kononov, A. (author), Mulder, W.A. (author)
The finite-difference method is widely used for time-domain modelling of the wave equation because of its ease of implementation of high-order spatial discretization schemes, parallelization and computational efficiency. However, finite elements on tetrahedral meshes are more accurate in complex geometries near sharp interfaces. We compared the...
conference paper 2012

document Zhebel, E. (author), Minisini, S. (author), Mulder, W.A. (author)
We solve the three-dimensional acoustic wave equation, discretized on tetrahedral meshes. Two methods are considered: mass-lumped continuous finite elements and the symmetric interior-penalty discontinuous Galerkin method (SIP-DG). Combining the spatial discretization with the leap-frog time-stepping scheme, which is second-order accurate and...
conference paper 2012

document Kazei, V.V. (author), Ponomarenko, A.V. (author), Troyan, V.N. (author), Kashtan, B.M. (author), Mulder, W.A. (author)
Full waveform inversion suffers from local minima, due to a lack of low frequencies in the data. A reflector below the zone of interest may, however, help in recovering the long-wavelength components of a velocity perturbation, as demonstrated in a paper by Mora. With the Born approximation for the perturbation in a reference model consisting of...
conference paper 2012

document Kononov, A. (author), Minisini, S. (author), Zhebel, E. (author), Mulder, W.A. (author)
Finite-element modelling of seismic wave propagation on tetrahedra requires meshes that accurately follow interfaces between impedance contrasts or surface topography and have element sizes proportional to the local velocity. We explain a mesh generation approach by example. Starting from a finite-difference representation of the velocity model,...
conference paper 2012

document Wirianto, M.
(author), Mulder, W.A. (author), Slob, E.C. (author)
In the application of controlled source electromagnetics for reservoir monitoring on land, the timelapse signal measured with a surface-to-surface acquisition can reveal the lateral extent on the surface of resistivity changes at depth in a hydrocarbon reservoir under production. However, a direct interpretation of the time-lapse signal may...
conference paper 2012

document Kavian, M. (author), Slob, E.C. (author), Mulder, W.A. (author)
Macroscopic measurements of electrical resistivity require frequency-dependent effective models that honor the microscopic effects observable in macroscopic measurements. Effective models based on microscopic physics exist alongside with empirical models. We adopted an empirical model approach to modify an existing physical model. This provided...
journal article 2012

document Kavian, M. (author), Slob, E.C. (author), Mulder, W.A. (author)
We measured the electric parameters for four different configurations of unconsolidated homogeneous and layered sands as a function of frequency, water saturation, and salinity under fluid flow conditions. Our objective is to determine if the effect of heterogeneities at scales much smaller than the skin depth can be captured by introducing...
journal article 2011

document Wirianto, M. (author), Mulder, W.A. (author), Slob, E.C. (author)
In the application of controlled source electromagnetics for reservoir monitoring on land, repeatability errors in the source will mask the time-lapse signal due to hydrocarbon production when recording surface data close to the source. We demonstrate that at larger distances, the airwave will still provide sufficient illumination of the target....
journal article 2011

document Slob, E.C. (author), Hunziker, J.W. (author), Mulder, W.A. (author)
journal article 2010

document Wirianto, M. (author), Mulder, W.A. (author), Slob, E.C. (author)
journal article 2010

document Hak, B. (author), Mulder, W.A.
(author)
Seismic data enable imaging of the Earth, not only of velocity and density but also of attenuation contrasts. Unfortunately, the Born approximation of the constant-density visco-acoustic wave equation, which can serve as a forward modelling operator related to seismic migration, exhibits an ambiguity when attenuation is included. Different...
journal article 2010

document Van Leeuwen, T. (author), Mulder, W.A. (author)
Wave-equation traveltime tomography tries to obtain a subsurface velocity model from seismic data, either passive or active, that explains their traveltimes. A key step is the extraction of traveltime differences, or relative phase shifts, between observed and modelled finite-frequency waveforms. A standard approach involves a correlation of the...
journal article 2010

document Van Leeuwen, T. (author), Mulder, W.A. (author)
In seismic imaging, one tries to infer the medium properties of the subsurface from seismic reflection data. These data are the result of an active source experiment, where an explosive source and an array of receivers are placed at the surface. Due to the absence of low frequencies in the data, the corresponding inverse problem is strongly non...
journal article 2009

document Mulder, W.A. (author), Hak, B. (author)
journal article 2009

document Plessix, R.E. (author), Mulder, W.A. (author)
We discuss some computational aspects of resistivity imaging by inversion of offshore controlled-source electromagnetic data. We adopt the classic approach to imaging by formulating it as an inverse problem. A weighted least-squares functional measures the misfit between synthetic and observed data. Its minimization by a quasi-Newton algorithm...
journal article 2008

document Mulder, W.A. (author)
The performance of a multigrid solver for time-harmonic electromagnetic problems in geophysical settings was investigated.
With the low frequencies used in geophysical surveys for deeper targets, the light-speed waves in the earth can be neglected. Diffusion of induced currents is the dominant physical effect. The governing equations were...
journal article 2008

document Mulder, W.A. (author), Wirianto, M. (author), Slob, E.C. (author)
We modeled time-domain EM measurements of induction currents for marine and land applications with a frequency-domain code. An analysis of the computational complexity of a number of numerical methods shows that frequency-domain modeling followed by a Fourier transform is an attractive choice if a sufficiently powerful solver is available. A...
journal article 2007

document Jönsthövel, T.B. (author), Oosterlee, C.W. (author), Mulder, W.A. (author)
We evaluated multigrid techniques for 3D diffusive electromagnetism. The Maxwell equations and Ohm's law were discretised on stretched grids, with stretching in all coordinate directions. We compared standard multigrid to alternative multigrid approaches with linewise smoothing and semi-coarsening, both as a stand-alone solver and as a...
conference paper 2006

document Jönsthövel, T.B. (author), Oosterlee, C.W. (author), Mulder, W.A. (author)
We evaluated multigrid techniques for 3D diffusive electromagnetism. The Maxwell equations and Ohm's law were discretised on stretched grids, with stretching in all coordinate directions. We compared standard multigrid to alternative multigrid approaches with linewise smoothing and semi-coarsening, both as a stand-alone solver and as a...
conference paper 2006

document Riyanti, C.D. (author), Erlangga, Y.A. (author), Plessix, R.E. (author), Mulder, W.A. (author), Vuik, C. (author), Oosterlee, C. (author)
The time-harmonic wave equation, also known as the Helmholtz equation, is obtained if the constant-density acoustic wave equation is transformed from the time domain to the frequency domain. Its discretization results in a large, sparse, linear system of equations.
In two dimensions, this system can be solved efficiently by a direct method. In...
journal article 2006
https://uvebtech.com/articles/2021/eb-operation-101/
# EB Operation 101

While electron beams are complex machines to build, the day-to-day operation of an EB is quite straightforward. Despite the wide array of possible applications, there are only a few settings to adjust: dose, accelerating voltage and line speed. That's it! Now, understanding how these parameters and the throughput of the beam apply to achieving a well-cured coating or a sterilized surface might take a few additional paragraphs ...

Dose is defined as the energy absorbed per unit mass. Common units for dose are kilogray (kGy) and megarad (Mrad), where 10 kGy = 1 Mrad = 10 kJ/kg. The term dose sometimes is used in the UV vernacular as a synonym for exposure or effective energy density; however, it is important to note that this usage is not in agreement with the EB, or more broadly, the ionizing radiation definition of dose [1]. In machine parameters, dose (D, kGy) can be defined as

D = K × I / (S × W)    (Equation 1)

where I is the beam current (mA), K is an efficiency factor, S is the line speed (mpm) and W is the beam width in vacuum (m) [2].

For the EB operator, though, dose simply is the setting that determines material effects. Too little dose and that wet, EB-curable coating won't fully polymerize, and it will emerge from the beam still wet. Too much dose and that same coating will emerge dry but crackled like dried mud – the additional energy causing so many cross-links that the coating becomes brittle. Similarly, the dose can be adjusted to add just the right amount of heat resistance to polyethylene or to achieve a log-10 reduction in a particular microbe. Typical dose ranges for common applications include 25 to 50 kGy for the curing of inks and coatings, 50 to 150 kGy for cross-linking plastics and 10 to 35 kGy for sterilization of products. Once dose and line speed settings are selected, the EB operating system does all the hard work.
Using Equation 1, it calculates the required beam current, and it even ramps the beam current along with the line speed to make sure the product receives a consistent dose during start-up and shutdown of the line.

Where dose determines the effect on the irradiated material, accelerating voltage (kV) determines how deeply into the material the dose is delivered. The cross-linking of polyethylene (PE) is a great example of how voltage selection can be used to achieve different results for different applications. For instance, in the case of shrink films, uniform dose distribution is desired to achieve uniform material properties throughout the film. The desire in heat seal applications, in contrast, is to only cross-link the outer layer of PE. The inner layer of the film must obtain little to no dose in order to prevent any increase in the melt temperature, ensuring a good heat seal. The needs of each application can be met by merely adjusting the accelerating voltage.

It should be emphasized that dose and accelerating voltage are independent dials. A dose of 30 kGy can be delivered at 80 kV or 300 kV or any voltage in between, as can a dose of 50 kGy, 100 kGy or any dose achievable by the EB. This independence provides great flexibility in application development.

The voltage required to penetrate a material is dependent on two factors – the material density and the material stopping power. Unlike UV, the penetration of accelerated electrons is not affected by the optical clarity of the material. Monte Carlo simulations are used to model how the electrons will interact with a material and to create depth-dose curves (Figure 1). Relative dose is shown on the vertical axis, where 1 represents the total surface dose. The horizontal axis consists of the depth of penetration into the material. The units are a bit odd for a depth – g/m2 – but it is written as such to be applicable for use with different material densities.
If the density of the material is 1 g/cm3, the units of depth become micrometers (µm). The example illustrated on the graph with dotted lines would read as follows for a material with a density of 1 g/cm3: 80% of the surface dose reaches a depth of 50 µm with a voltage setting of 125 kV. If the material density were 2 g/cm3, the depth would simply be revised to 25 µm.

Note: Monte Carlo simulations require information about the machine design, and thus the depth-dose curves generated are not universal. In addition, a representative material is usually chosen for modeling, and the curves should only be used to estimate voltage settings for similar materials – plastics for plastics, metals for metals, etc. – to avoid excessive error.

Finally, throughput. This variable is not an operator input but instead is tied to machine capacity. A throughput tells the operator what maximum dose can be achieved at a particular speed. The units of throughput are kGy*mpm or Mrad*fpm. With a throughput of 10,000 kGy*mpm, for example, the maximum dose at 100 mpm would be 100 kGy, at 50 mpm the maximum dose would be 200 kGy, and so forth. This relationship stems from Equation 1, which can be used to relate the maximum current rating of an EB power supply to dose and speed. Additionally, the throughput does vary with the accelerating voltage, because the efficiency factor, K, changes with voltage.

Crash course complete!

Sage Schissel, Ph.D.
Applications Specialist
PCT Ebeam and Integration LLC
sage.schissel@pctebi.com

### References

1. Terminology Used for Ultraviolet (UV) Curing Process Design and Measurement. https://www.radtech.org/intro-to-uv-eb/uv-glossary
2. Davidson, R. S., Mechanism of Electron-Beam Curing in Radiation Curing in Polymer Science and Technology. Fouassier, J. P., Rabek, J. F., eds. Vol. 3. 1993.
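The relations described in the article (the machine-parameter dose equation, the throughput rule, and the depth-density scaling) can be sketched numerically. This is an illustrative sketch only: the efficiency factor K and beam width W are machine-specific values the article does not give, so any dose() call would need placeholder values for them.

```java
// Numeric sketch of the EB relations described in the article.
// K and W are machine-specific; treat them as placeholders here.
public class EbDose {

    // Equation 1: D = K * I / (S * W), with I in mA, S in mpm, W in m
    static double dose(double k, double currentMa, double speedMpm, double widthM) {
        return k * currentMa / (speedMpm * widthM);
    }

    // Throughput rule: maximum achievable dose (kGy) at a given line speed
    static double maxDose(double throughputKgyMpm, double speedMpm) {
        return throughputKgyMpm / speedMpm;
    }

    // Depth-dose scaling: areal depth (g/m^2) divided by density (g/cm^3)
    // gives the penetration depth in micrometres
    static double depthMicrometres(double arealDepth, double density) {
        return arealDepth / density;
    }

    public static void main(String[] args) {
        // Article example: 10,000 kGy*mpm throughput at 100 mpm line speed
        System.out.println(maxDose(10_000, 100));    // 100.0 (kGy)
        // Article example: 50 g/m^2 at density 2 g/cm^3
        System.out.println(depthMicrometres(50, 2)); // 25.0 (um)
    }
}
```

Both printed values match the worked examples in the article: halving the line speed doubles the maximum dose, and doubling the density halves the penetration depth in micrometres.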
http://www.core-econ.org/espp/book/text/06.html
At work

Unit 6 The firm: Employees, managers, and owners

Introduction

• The firm is an actor in the capitalist economy, and also a stage on which interactions are played out among the firm’s employees, managers, and owners.
• Hiring workers is different from buying other goods and services. The contract between the two parties does not cover many things that the employer really cares about, including how hard and how well the employee will work, and how long she will stay with the firm.
• Firms do not pay the lowest wages possible. They set wages so that employees experience a cost if they lose their jobs. This motivates them to work effectively and stay with the firm, and it ensures that firms always have a large pool of job applicants.
• We explain why working together in firms brings mutual gains for owners, managers, and employees. We will also explain why, because employees have something to lose if they lose their jobs, there will always be unemployed people in the economy.
• Working in the gig economy or in a worker-owned cooperative is different from being an employee in a capitalist firm.

In March 2000, Terri Lawrence was driving her 1996 Ford Explorer SUV near Fort Lauderdale, Florida. ‘All of a sudden,’ she said, ‘there was this explosion.’ One of her tyres had blown out. The Ford Explorer flipped over, and she was badly injured. By the summer of 2000, with similar reports of blowouts and overturned Explorers accumulating, Ford convened a ‘war room’ to deal with the public relations catastrophe. They quickly determined that the Firestone tyres used on most Explorers were at fault. There were no unusual reports of blowouts on Explorers with Goodyear tyres. In August 2000, in partnership with Ford, Firestone recalled 14.4 million tyres. According to the US National Highway Traffic and Safety Administration, blowouts of the Firestone tyres in question had resulted in crashes that took 271 lives.
In the four months following the recall, the market value of Firestone shares on the stock exchange dropped by $9.2 billion, to less than half of their value before the crisis. But the cause of the spate of fatal blowouts remained a mystery. High-speed stress tests confirmed that there was nothing wrong with the design of the tyres. An inconspicuous clue, however, pointed to ‘the scene of the crime’, if not its motive. On the sidewall of each of the blown-out tyres was a ten-digit tyre code, indicating the particular plant that had produced the tyre and the week of its production. Most of the faulty tyres had been produced at just one of Firestone’s six plants, located in Decatur, Illinois. For years, the Firestone Decatur plant had been in the news for other reasons. In 1994, the company had imposed a 12-hour shift that rotated between night and day for each worker, replacing the historic 8-hour shift. New hires’ wages were reduced by 30%; vacations for more senior workers were cut by two weeks. On 12 July 1994, the United Rubber Workers union that represented the employees called a strike. The firm immediately hired 2,300 replacement workers, paying them 30% less than wages previously paid. Ten months later, the union called off the strike; returning workers accepted substantial pay cuts, a freeze in their pension benefits, and the 12-hour shifts. Bitterness toward the company and protests continued. Building tyres at the time was a labour-intensive and skilled occupation. A number of the union workers blamed lack of training and experience among the replacement workers for the tyre blowouts. William Newton, a senior tyre builder in Decatur, reported that he ‘saw a lot of people [working as replacements] who did not know how to build tyres’. 
But investigators looking at the detailed records of exactly when the faulty tyres were produced were in for a shock—virtually none of them had been produced during the strike, when Firestone was employing the replacement workers. Most of the faulty tyres had been produced by experienced union workers, both before the strike—when Firestone’s pay cuts, 12-hour shift, and other new demands had been announced—and after the defeat of the strike, when the union workers returned. While it cannot be proven, it seems likely that the permanent workers at the Decatur plant had retaliated against Firestone, devoting less effort to producing safe tyres, or even deliberately sabotaging production. The owners of Firestone discovered that they could indeed impose a 12-hour shift and a 30% pay cut, but they could not ensure that safe tyres would be produced if their employees were angry as a result. Firestone and Ford are business organizations called firms. Not everyone is employed in a firm. For example, many farmers, carpenters, software developers, and personal trainers work independently, as neither employee nor employer. While some people work for governments and not-for-profit organizations, the majority of people in high-income countries make their living by working in a firm. Firms are major actors in the economy; we will use this and the next unit to explain how they work. A firm is often referred to as if it were a person—we talk about ‘the price Firestone charges’. But, while firms are actors—and in some legal systems are treated as if they were individuals—firms are also the stages on which the people who make up the firm act out their sometimes common, but sometimes competing, interests. The people making up the firm—owners, managers, and employees—are united in their common interest in the firm’s success because all of them would suffer if it were to fail. 
However, they have conflicting interests about how to distribute the profits from the firm’s success among themselves (wages, managerial salaries, and owners’ profits); they may also disagree about policies (such as conditions of work and managerial perks) and who makes the key decisions (such as whether it was a good idea to impose a 12-hour shift on the Firestone workers in Decatur and cut their vacation times). To understand the firm, we will model how employers set wages and how employees respond. We have already seen, in earlier units, the importance of work and firms in the economy: • Work is how people produce their livelihoods. In deciding how much time to spend working, people face a trade-off between free time and the goods that they can produce or the wage income that they can earn, as we saw in Unit 4. • We also know that there are potential gains (for all concerned) from individuals specializing in tasks for which they have a comparative advantage through the division of labour. • The division of labour may be coordinated through market exchange, as in the invisible hand game in Unit 2. In Unit 5, the interaction between Angela and Bruno was coordinated by a contract that traded the use of land for a share of the crop. • Another way that work may be coordinated and combined with other inputs is by organization within a firm. In this unit, we study how the coordination of labour takes place within firms in the modern capitalist economy. We model how wages are determined when there are conflicts of interest between employers and employees, and look at what this means for the sharing of the mutual gains that arise in a firm. In Unit 7, we will look at the firm as an actor in its relationship with other firms and with its customers. division of labour The specialization of producers to carry out different tasks in the production process. Also known as: specialization. 
firm A business organization which pays wages and salaries to employ people, and purchases inputs, to produce and market goods and services with the intention of making a profit.

6.1 Firms, markets, and the division of labour

The economy is made up of people doing different things, for example producing Apple iPhones or making clothing for export. Producing smartphones involves many distinct tasks, done by different employees within the companies that make components for Apple—Toshiba or Sharp in Japan, or Infineon in Germany. Setting aside the work done in families, in a capitalist economy, the division of labour is coordinated in two major ways—firms and markets.

• Through firms, the components of goods are produced by different people in different departments of the firm and assembled to produce a finished shirt or iPhone.
• Components produced by groups of workers in different firms may also be brought together through market interactions between firms.
• By buying and selling goods on markets, the finished iPhone gets from the producer into the pocket of the consumer.

Among the institutions of modern capitalist economies, the firm rivals the government in importance. John Micklethwait and Adrian Wooldridge explain how this happened. John Micklethwait and Adrian Wooldridge. 2003. The Company: A Short History of a Revolutionary Idea. New York, NY: Modern Library.

Why do firms work the way they do? For example, why do the owners of the firm hire the workers, rather than the other way around? Randall Kroszner and Louis Putterman summarize this field of economics. Randall S. Kroszner and Louis Putterman (editors). 2009. The Economic Nature of the Firm: A Reader. Cambridge: Cambridge University Press.

In this unit, we study firms. In the units to follow, we study markets. Herbert Simon, an economist, used the view from Mars to explain why it is important to study both.
Great economists Herbert Simon Trained as a political scientist, Simon’s desire to understand society led him to study both institutions and the human mind—to open the ‘black box’ of motivations that economists had come to take for granted. Herbert ‘Herb’ Simon (1916–2001) was celebrated in the disciplines of computer science, psychology, and, of course, economics, for which he won the Nobel Prize in 1978. Imagine a visitor approaching Earth from Mars, Simon urged his readers. Looking at Earth through a telescope that revealed social structure, what would our visitor see? Companies might appear as green fields, he suggested, divisions and departments as faint contours within. Connecting these fields, red lines of buying and selling. Within these fields, blue lines of authority, connecting boss and employee, foreman and assembly worker, mentor and mentee. Traditionally, economists had focused on the market and the competitive setting of prices. But to a visitor from Mars, Simon suggested: Organizations would be the dominant feature of the landscape. A message sent back home, describing the scene, would speak of ‘large green areas interconnected by red lines.’ It would not likely speak of ‘a network of red lines connecting green spots’.1 A firm, he pointed out, is not simply an agent, shifting to match supply and demand. It is composed of individuals, whose needs and desires might conflict. Simon asked: In what ways could these differences be resolved? When would an individual shift from contract work (a ‘sale’ of a particular, predefined task) to an employment relation? An employment relation where a boss dictates the task after the sale is the relationship at the heart of a firm. When the desired task is easy to specify in a contract, Simon explained that we could view this as simply work-for-hire. 
But high uncertainty (the employer not knowing in advance what needs to be done) would make it impossible to specify in a contract what the worker was to do and, in this case, the result would be an employer–employee relation that is characteristic of the firm.2 This early work showcased two of Simon’s lasting interests—the complexity of economic relations, where one might sell an obligation that was incompletely described, and the role of uncertainty in changing the nature of decision making. His argument demonstrated the emergence of the ‘boss’. Understanding how contract work turns into employment helps us understand a particular relationship between two members of an organization. We have yet to explain the firm as a whole—the Martian’s green fields. For Simon, the study of markets needed to be supplemented—even supplanted—by institutions and governments better equipped to handle uncertainty and rapid change. These alternative ‘authority mechanisms’ draw on partially understood aspects of the human psyche: loyalty, group identification, and creative satisfaction. By the time of his death in 2001, Simon had seen many of his ideas reach the mainstream. Behavioural economics has roots in his attempts to build economic theories that reflect empirical data. Simon’s view from Mars shows that economics could not be a self-contained science; an economist needs to be both a mathematician—working with decision sets and utilities—and a social psychologist—reasoning about the motivations of human relationships. The coordination of work The way that labour is coordinated within firms is different to coordination through markets: • Firms represent a concentration of economic power: This is placed in the hands of the owners and managers, who regularly issue directives with the expectation that their employees will carry them out. An ‘order’ in the firm is a command. 
• Markets are characterized by a decentralization of power: Purchases and sales result from the buyers’ and sellers’ autonomous decisions. An ‘order’ in a market is a request for a purchase that can be rejected if the seller pleases. The prices that motivate and constrain people’s actions in a market are the result of the actions of thousands or millions of individuals, not a decision by someone in authority. Although the government can tax and regulate private property, the idea of private property specifically limits the things a government or anyone else can do with your possessions.

These two books describe the property rights, authority structures, and market interactions that characterize the modern capitalist firm.

• Henry Hansmann. 2000. The Ownership of Enterprise. Cambridge, MA: Belknap Press.
• Oliver E. Williamson. 1985. The Economic Institutions of Capitalism. New York, NY: Collier Macmillan.

In a firm, by contrast, owners or their managers direct the activities of their employees, who may number in the thousands or even millions. The managers of Walmart, the world’s largest retailer, decide on the activities of 2.2 million employees, a larger number of people than any army in world history before the nineteenth century. Walmart is an exceptionally large firm, but it is not exceptional in that it brings together a large number of people who work in a way coordinated (by the management) to make profits. Like any organization, firms have a decision-making process and ways of imposing their decisions on the people in it. When we say that ‘Fiat outsourced its component production’ or ‘the firm sets a price of €11,200’, we mean that the decision-making process in the firm resulted in these actions. Figure 6.1 shows a simplified picture of the firm’s actors and decision-making structure.

Figure 6.1 The firm’s actors and its decision-making and information structures.
Figure 6.1a Owners decide long-term strategies: The owners, through their board of directors, decide the long-term strategies of the firm concerning how, what, and where to produce. They then direct the manager(s) to implement these decisions.

Figure 6.1b Managers assign workers: Each manager assigns workers to the tasks required for these decisions to be implemented and attempts to ensure that the assignments are carried out.

Figure 6.1c Flows of information: The green arrows represent flows of information. The upward green arrows are dashed lines because workers often know things that managers do not, and managers often know things that owners do not.

asymmetric information Information that is relevant to the parties in an economic interaction, but is known by some but not by others. See also: adverse selection, moral hazard.

The dashed upward green arrows represent a problem of asymmetric information between levels in the firm’s hierarchy (owners and managers, managers and workers). Since owners or managers do not always know what their subordinates know or do, not all of their directions or commands (grey downward arrows) are necessarily carried out. This relationship between the firm and its employees contrasts with the firm’s relationship with its customers, which we study in the next unit. The bakery firm cannot text its customers to tell them to ‘Show up at 8 a.m. and purchase two loaves of bread at the price of €1 each’.
The firm could tempt its customers with a special offer, but unlike the relationship with its employees, it cannot require them to show up. When you buy or sell something, it is generally voluntary. In buying or selling, you respond to prices, not orders. The firm is different; it is defined by having a decision-making structure in which some people have power over others.

6.2 Power relations within the firm

Karl Marx, a philosopher and political theorist in the nineteenth century, was also interested in the power relations in a firm. He concluded that conflict between employers and workers was inevitable. Buying and selling goods in an open market is a transaction between equals—nobody is in a position to order anyone else to buy or sell. In the labour market, in which owners of capital are buyers and workers are the sellers, the appearance of freedom and equality was, to Marx, an illusion. He reasoned that employers did not buy the employees’ work, because this cannot be purchased. Instead, the wage allowed the employer to rent the worker and to command workers inside the firm. Workers were not inclined to disobey because they might lose their jobs and join the ‘reserve army’ of the unemployed (the phrase that Marx used in his 1867 work, Capital). Marx thought that the power wielded by employers over workers was a core defect of capitalism.

Capital, Marx’s most famous work, is long and covers many subjects, but you can use a searchable archive to find the passages you need. Karl Marx. (1867) 1906. Capital: A Critique of Political Economy. New York, NY: Random House.

Great economists Karl Marx

Adam Smith, writing at the birth of capitalism in the eighteenth century, was to become its most famous advocate. Karl Marx (1818–1883), who watched capitalism mature in the industrial towns of England, was to become its most famous critic. Born in Prussia (now part of Germany), he distinguished himself only by his rebelliousness as a student at a Jesuit high school.
In 1842, he became a writer and editor for the Rheinische Zeitung, a liberal newspaper, which was later closed by the government. After this, he moved to Paris and met Friedrich Engels, with whom he collaborated in writing The Communist Manifesto (1848). Marx then moved to London in 1849. At first, Marx and his wife Jenny lived in poverty. He earned money by writing about political events in Europe for the New York Tribune. Marx saw capitalism as just the latest in a succession of economic arrangements in which people have lived since prehistory. Inequality was not unique to capitalism, he observed—slavery, feudalism, and most other economic systems had shared this feature—but capitalism also generated perpetual change and growth in output.3 He was the first economist to understand why the capitalist economy was the most dynamic in human history. Perpetual change arose, Marx observed, because capitalists could survive only by introducing new technologies and products, finding ways of lowering costs, and by reinvesting their profits into businesses that would perpetually grow. Marx also had influential views on history, politics, and sociology. He thought that history was decisively shaped by the interactions between scarcity, technological progress, and economic institutions, and that political conflicts arose from conflicts about the distribution of income and the organization of these institutions. He thought that capitalism, by organizing production and allocation in anonymous markets, created atomized individuals instead of integrated communities. In recent years, economists have returned to themes in Marx’s work to help explain economic crises. These themes include the firm as an arena of conflict and of the exercise of power (this unit), the role of technological progress (Unit 1), and the problems created by inequality (Unit 5). 
The writer George Bernard Shaw (1856–1950) joked that ‘if all economists were laid end to end, they would not reach a conclusion.’ This is funny, but not entirely true. Read the ‘When economists agree’ box to see how Marx and Chicago economist, Ronald Coase—two economists from different centuries and political orientations—came up with similar ways of understanding the power relations between employers and their employees. When economists agree Coase and Marx on the firm and its employees In the nineteenth century, Marx had contrasted the way that buyers and sellers interact on a market, voluntarily engaging in trade, with how the firm is organized as a top–down structure, one in which employers issue orders and workers follow them. He called markets ‘a very Eden of the innate rights of man’, but described firms as ‘exploit[ing] labour-power to the greatest possible extent.’ When Ronald Coase died in 2013, he was described by Forbes magazine as ‘the greatest of the many great University of Chicago economists’. The motto of Forbes is ‘The capitalist tool’, and the University of Chicago has a reputation as the centre of conservative economic thinking. Yet, like Marx, Coase stressed the central role of authority in the firm’s contractual relations: Note the character of the contract into which an [employee] enters that is employed within a firm … for certain remuneration [the employee] agrees to obey the directions of the entrepreneur.4 Coase founded the study of the firm as both a stage and an actor. In The Nature of the Firm he wrote: If a workman moves from department Y to department X, he does not go because of a change in relative prices but because he is ordered to do so … the distinguishing mark of the firm is the suppression of the price mechanism.5 Coase sought to understand why firms exist at all, quoting his contemporary D. H. 
Robertson’s description of them as ‘islands of conscious power in this ocean of unconscious cooperation’.6 Coase pointed out that the firm in a capitalist economy is a miniature, privately owned, centrally planned economy. Its top–down decision-making structure resembles the centralized direction of production in entire economies that took place in many Communist countries (and in the US and the UK during the Second World War).7 Both Marx and Coase based their thinking on careful empirical observation, and they arrived at a similar understanding of the hierarchy of the firm. They disagreed, however, on the consequences of what they observed—Coase thought that the hierarchy of the firm was a cost-reducing way to do business; Marx thought that the coercive authority of the boss over the worker limited the employee’s freedom. Over this, they disagreed. But they also advanced economics with a common idea. How economists learn from data Managers exert power These three investigations, published as books, show the effect of the power that managers and owners exert. • In Nickel and Dimed: On (Not) Getting By in America, Barbara Ehrenreich worked undercover for minimum wage in motels and restaurants to see how America’s poor live.8 • In Hard Work: Life in Low-pay Britain, Polly Toynbee, a British journalist, had previously done the same in the UK in 2003, taking jobs such as call centre employee and home care worker.9 • In Labor and Monopoly Capital: The Degradation of Work in the Twentieth Century, Harry Braverman and Paul Sweezy provide a history of what they call the ‘deskilling’ process, and suggest how dumbing down jobs is a strategy for maximizing the employer’s profits.10 Contracts and relationships contract A legal document or understanding that specifies a set of actions that parties to the contract must undertake. 
The difference between market interactions and relationships within firms is clear when we consider the differing kinds of written and unwritten contracts that form the basis of exchange. A sale contract for a car transfers ownership, meaning that the new owner can now use the car and exclude others from its use. A rental contract on an apartment does not transfer ownership of the apartment (which would include the right to sell it); instead it gives the tenant a limited set of rights over the apartment, including the right to exclude others (including the landlord) from its use.

wage labour A system in which producers are paid for the time they work for their employers.

Under a wage labour contract, an employee gives the employer the right to direct him or her to be at work at specific times, and to accept the authority of the employer over the use of his or her time while at work. The employer does not own the employee as a result of this contract. If the employer did, the employee would be called a slave. We might say that the employer has ‘rented’ the employee for part of the day. To summarize:

• Contracts for products: When sold in markets, they permanently transfer ownership of the good from the seller to the buyer.
• Contracts for labour: These contracts temporarily transfer authority over a person’s activities from the employee to the manager or owner.
• A contract does not have to be written: It can be an understanding between the employer and the employee.

Exercise 6.1 The structure of an organization

In Figure 6.1, we showed the actors and decision-making structure of a typical firm.

1. How might the actors and decision-making structure of three organizations, Google, Wikipedia, and a family farm compare with this?
2. Draw an organizational structure chart in the style of Figure 6.1 to represent each of these entities.

Question 6.1 Choose the correct answer(s)

Which of the following statements are true?
• A labour contract transfers ownership of the employee from the employee to the employer.
• According to Herbert Simon, a visitor approaching Earth from Mars with a telescope that reveals social structure would see a network of red lines (market exchanges) connecting green spots (firms and consumers).
• In a labour contract, one side of the contract has the power to issue orders to the other side, but this power is absent from a sale contract.
• A firm is a structure that involves decentralization of power to the employees.

• That would be slavery. A labour contract grants the firm the authority to direct the activities of the employee during specific times.
• Herbert Simon reported the converse—highlighting the importance of the firms (green) and the lines of authority within them, rather than the exchange activities of buying and selling.
• A labour contract gives the employer the authority to direct the activities of the employee, whereas a sale contract transfers property rights and does not bind the parties to further actions.
• Firms represent a concentration of economic power in the hands of the owners and managers.

6.3 Other people’s money: The separation of ownership and control

The firm’s profits legally belong to the people who own the firm’s assets, including its capital goods. The firm’s owners direct the other members of the firm to take actions that contribute to the firm’s profits. This, in turn, will increase the value of the firm’s assets and improve the owners’ wealth. There are thus two aspects of ownership of a firm:

• The owners direct the activities of other participants in the firm: Usually through hiring managers.
• The owners receive the firm’s profits: Namely, whatever remains after the revenues, which are the proceeds from sale of the products, is used to pay employees, managers, suppliers, creditors, and taxes.
residual claimant The person who receives the income left over from a firm or other project after the payment of all contractual costs (for example the cost of hiring workers and paying taxes). Profit is the residual. It is what’s left of the revenues after these payments. The owners claim it, which is why they are called residual claimants. Managers and employees are not residual claimants (unless they have some share in the ownership of the firm). This division of revenue has an important implication. If the firm’s revenues increase because managers or employees do their jobs well, the owners will benefit, but the managers and employees will not (unless they receive a promotion, bonus, or salary increase). This is one reason we consider the firm as a stage, one on which not all the actors have the same interests. Owners delegate control to managers share A part of the assets of a firm that may be traded. It gives the holder a right to receive a proportion of a firm’s profit and to benefit when the firm’s assets become more valuable. Also known as: common stock. In large corporations, there are typically many owners. Most of them play no part in the firm’s management. The owners of the firm are the individuals and institutions (such as pension funds) that own the shares issued by the firm. By issuing shares to the general public, a company can raise capital to finance its growth, leaving strategic and operational decisions to a relatively small group of specialized managers. These decisions include what, where, and how to produce the firm’s output, or how much to pay employees and managers. The senior management of a firm is also responsible for deciding how much of the firm’s profits are distributed to shareholders in the form of dividends, and how much is retained to finance growth. The owners benefit from the firm’s growth because they own part of the value of the firm, which increases as the firm grows. 
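The division of revenue described here, with profit as the residual claimed by the owners, can be sketched in a few lines of code. This is purely an illustration of the accounting logic; the firm and all figures below are hypothetical.

```python
# Profit is the residual: what remains of revenue after all contractual
# payments (wages, salaries, suppliers, creditors, taxes) have been made.
# The owners claim this residual; employees and managers do not.
# All figures are invented, for illustration only.

def residual_profit(revenue, wages, salaries, suppliers, interest, taxes):
    """Return the residual claimed by the firm's owners."""
    contractual_payments = wages + salaries + suppliers + interest + taxes
    return revenue - contractual_payments

profit = residual_profit(
    revenue=1_000_000,   # proceeds from sales
    wages=400_000,       # paid to employees
    salaries=150_000,    # paid to managers
    suppliers=250_000,   # paid for other inputs
    interest=50_000,     # paid to creditors
    taxes=60_000,
)
print(profit)  # prints 90000

# If employees work more effectively and revenue rises by 100,000 while
# wages and salaries stay fixed, the entire increase accrues to the owners:
print(residual_profit(1_100_000, 400_000, 150_000, 250_000, 50_000, 60_000))
# prints 190000
```

The second call makes the point in the text concrete: because wages and salaries are contractual, an increase in revenue flows entirely to the residual claimants unless the employees receive a promotion, bonus, or salary increase.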
separation of ownership and control The attribute of some firms by which managers are a separate group from the owners. When managers decide on the use of other people’s funds, this is referred to as the separation of ownership and control. The separation of ownership and control results in a potential conflict of interest. Conflict of interest between owners and managers The decisions of managers affect profits, and profits decide the incomes of the owners. But the interests of owners and managers will be in conflict because managers’ salaries and bonuses are paid from profits that would otherwise go to the owners. There are many things that managers can do to raise their pay at the expense of profits. For example, in firms listed on the stock market, managers’ pay rises and falls with the firm’s stock market performance over a period as short as a quarter or a year; there are many ways managers can boost the firm’s short-term stock market performance but damage the firm’s profitability in the long run. Managers are in day-to-day control of the firm’s assets and they may choose to take actions that benefit themselves, at the expense of the owners. An example is where managers seek to increase their own power and prestige through empire-building, even if that is not in the interests of shareholders. Even sole owners of firms are not required to maximize their profits. Restaurant owners can choose menus they personally like, or waiters who are their friends. But, unlike managers, when they lose profits as a result, the cost comes directly out of their own pockets. Although Adam Smith had not seen the modern firm, he observed the tendency of senior managers to serve their own interests rather than those of shareholders. 
He said this about the managers of what were then called joint-stock companies: [B]eing the managers rather of other people’s money than of their own, it cannot well be expected, that they should watch over it with the same anxious vigilance with which the partners in a [firm managed by its owners] frequently watch over their own … Negligence and profusion, therefore, must always prevail, more or less, in the management of the affairs of such a company. (The Wealth of Nations, 1776) Aligning the interests of owners and managers There are many ways that owners can incentivize managers to serve their interests. One is that they can structure contracts so that managerial compensation depends on the performance of the company’s share price over a lengthy period of time. Another is that the firm’s board of directors, which represents the firm’s shareholders and typically has a substantial share in the firm (like a representative of a pension fund), can monitor the managers’ performance. The board has the authority to dismiss managers. free ride Benefiting from the contributions of others to some cooperative project without contributing oneself. But although the shareholders, who are the ultimate owners, have the right to replace members of the board, they rarely do so. Shareholders are a large and diverse group that cannot easily coordinate to decide something. Occasionally, however, this free-rider problem is overcome and a shareholder with a large stake in a company may lead or coordinate a shareholder revolt to change or influence the board of directors and senior management. In spite of the separation of ownership and control, when we model the firm as an actor, we often assume that it maximizes profits. This is a simplification, but a reasonable one for many purposes: • Owners have a strong interest in profit maximization: It is the basis of their wealth. 
• Market competition tends to penalize or eliminate firms that do not make substantial profits for their owners: We saw this process in Unit 1 as part of the explanation of the permanent technological revolution, and it applies to all aspects of the firms’ decisions.

Question 6.2 Choose the correct answer(s)

Which of the following statements about the separation of ownership and control are true?

• When the ownership and control of a firm is separated, the managers become the residual claimants.
• Managers always work to maximize the firm’s profit.
• One way to address the problem associated with the separation of ownership and control is to pay the managers a salary that depends on the performance of the firm’s share price.
• It is effective for shareholders to monitor the performance of the management, in a firm owned by a large number of shareholders.

• The shareholders are the residual claimants.
• Managers may choose to take actions that provide benefits for themselves at the expense of the owners.
• Such performance-related pay is a common method of incentivizing managers to maximize the value of their firm.
• When there are many shareholders, there is not only a coordination problem but also a free-rider problem, where every shareholder relies on others to do the costly monitoring (and hence too little monitoring is undertaken).

6.4 Other people’s labour

The firm does not only manage ‘other people’s money’; the decision-makers in a firm also decide on the uses to which their employees’ efforts will be put. People participate in firms because they can do better if they are part of the firm than if they were not (for example, if they were self-employed). As in all voluntary economic interactions, there are mutual gains.
But just as conflicts arise between owners and managers, there will generally be differences between owners and managers on the one hand, and employees on the other, about how the firm will use the strength, creativity, and other skills of its employees. A firm’s profits (before the payment of taxes) depend on three things:
• Costs of acquiring the inputs for the production process: Labour is one of these inputs.
• Output: How much these inputs produce.
• Sales revenues: The money the firm receives when it sells goods or services.

Our focus here is how firms seek to minimize the cost of acquiring the necessary labour to produce the goods and services they sell.

The employment contract is incomplete
Hiring employees is different from buying other goods and services. When we buy a shirt or pay someone to mow a lawn, it is clear what we get for our cash. If we don’t get it, we don’t pay; if we have already paid, we can go to court and get our money back. But a firm cannot write an enforceable employment contract that specifies the exact tasks employees have to perform in order to get paid. This is for three reasons:
• When the firm writes a contract for the employment of a worker, it cannot know exactly what it will need the employee to do, because this will be determined by unforeseen future events.
• It would be impractical or too costly for the firm to observe exactly how much effort each employee makes in doing the job.
• Even if the firm somehow acquired this information, it could not be the basis of an enforceable contract.

To understand the last point, consider a restaurant owner, who would like her staff to serve customers in a pleasant manner. Imagine how difficult it would be for a court to decide whether the owner can withhold wages from a waiter because he had not smiled often enough.

incomplete contract A contract that does not specify, in an enforceable way, every aspect of the exchange that affects the interests of parties to the exchange (or of others).
An employment contract omits things that both the employees and the business owner care about—how hard and how well the employee will work, and for how long the worker will stay. We call this an incomplete contract. As a result of this contractual incompleteness, paying the lowest possible wage is almost never the firm’s strategy to minimize the cost of acquiring the labour effort it needs.

Exercise 6.2 Incomplete contracts
Think of two or three jobs with which you are familiar, perhaps a teacher, a retail worker, a nurse, or a police officer. In each case, indicate why the employment contract is necessarily incomplete. What important parts of the person’s job—things that the employer would like to see the employee do or not do—cannot be covered in a contract, or if they are, cannot be enforced?

Why not pay workers piece rates?
piece-rate work A type of employment in which the worker is paid a fixed amount for each unit of the product made.

Why is it not possible for firms just to pay employees according to how productive they are? For example, they could pay employees at a clothing factory $2 for each garment they finish. This method of payment, known as piece rate, provides the employee with an incentive to exert effort, because employees take home more pay if they make more garments. In the late nineteenth century, the pay of more than half of US manufacturing workers was based on their output, but piece rates are not widely used in modern economies. At the turn of the twenty-first century, fewer than 5% of manufacturing workers in the US were paid piece rates and, beyond the manufacturing sector, piece rates are used even less often. Why do most of today’s firms not use this simple method to induce high effort from their employees?
• It is very difficult to measure the amount of output an employee is producing in modern knowledge- and service-based economies: Think about an office worker, or someone providing home care for an elderly person.
• Employees rarely work alone: This means that measuring the contribution of individual workers is difficult (for example, a team in a marketing company working on an advertising campaign, or the kitchen staff at a restaurant).

gig economy An economy made up of people performing services matched by means of a computer platform with those paying for the service. Workers are paid for task performance and not per hour, are not legally recognized as employees of the company that owns the platform, and typically receive few if any benefits from the owners other than matching.

An exception is today’s gig economy (see Section 6.12). Consider the case of Uber, Lyft, Deliveroo, and other delivery and transportation services. Unlike modern office and factory work, the job is typically done by a single individual working alone. The nature of the job is easily subject to contract, because it is readily determined whether it has been carried out or not. If piece rates are not practical, then what other method could a firm use to induce high effort from workers? How could the firm provide an incentive to do the job well, even though the worker is paid for time and not output? Just as the owners of the firm protect their interests by linking management pay to the firm’s share price, the manager uses incentives so that employees will work effectively.

Question 6.3 Choose the correct answer(s)
Which of the following statements regarding employment contracts are correct?
• Employment contracts are incomplete as they can only specify things that both the employees and the business owner care about.
• The firm is required to state exactly what it needs the employee to do in an employment contract.
• Employees’ effort levels cannot be the basis of an enforceable contract.
• The firm needs to specify exactly how much effort employees are expected to put into their jobs.
• Employment contracts are incomplete as they cannot specify things that both the employees and the business owner care about—how hard and how well the employee will work, and for how long the worker will stay. • Due to unforeseen future events, the firm cannot possibly know exactly what it will need the employee to do at the time the contract is signed. • This is true—for example, a restaurant owner cannot take an employee to a court to decide whether he can withhold wages because the waiter does not smile often enough. • It is impractical or too costly for the firm to observe exactly how much effort each employee puts into their job. Question 6.4 Choose the correct answer(s) Which of the following are reasons why employment contracts are incomplete? • The firm cannot contract an employee not to leave. • The firm cannot specify every eventuality in a contract. • The firm is unable to observe exactly how an employee is fulfilling the contract. • The contract is unfinished. • It may be costly for the firm if the employee leaves, but employees retain the right to do so. • Since the firm does not know all the tasks it will require of an employee, the contract is necessarily incomplete. • Since effort or the quality of an individual’s work cannot be perfectly monitored and measured, it cannot be specified in the contract. • Employment contracts are usually long term. An incomplete contract is not one that is unfinished, but rather one that does not completely specify every relevant aspect of the relationship. 6.5 Why do a good day’s work? Employment rents There are many reasons why people put in a good day’s work. For many people, doing a good job is its own reward, and doing anything else would contradict their work ethic. Even for those not intrinsically motivated to work hard, feelings of responsibility for other employees or for one’s employer may provide strong work motivation. 
For some employees, hard work is the appropriate way to respond to the employer for providing a job with good working conditions. In other cases, firms use performance-related pay to reward hard work. It is sometimes possible to identify teams of workers whose output is readily measured—for example, the percentage of on-time departures for airline staff—and pay a bonus to the whole group, to be divided among team members. These employees have a reason to work hard—they are paid for it.

Fear of losing your job or not being promoted
However, there is another reason to do a good job—the fear of being fired, or of missing the opportunity to be promoted into a position that has higher pay and greater job security. Laws and practices concerning the termination of employment for cause (that is, because of inadequate or low-quality work, not due to insufficient demand for the firm’s product) differ among countries. In some countries, the owners of the firm have the right to fire a worker whenever they choose, while in others, dismissal is difficult and costly. But, even in these cases, an employee has to fear the consequences of not meeting the standards that the employer sets. If she failed to meet them, she might be among the first to lose her job when lower demand for the firm’s products results in workers being laid off.

Read about the tactics used by some Japanese companies—in a culture of lifetime employment—to get unneeded employees to quit.

Do workers care whether they lose their jobs? They would have no reason to care if they could immediately get another job at the same wage and working conditions. If this is not the case, then there is a cost to losing your job, as well as some benefits from not working. We list the benefits that would be lost if you lost your job, and the costs of working, in Figure 6.2.
Figure 6.2 The net cost of job loss.

Benefits of your job (what you would lose if you lost it):
• Wage income
• Utility you get from your workplace friends and from the social status and psychological benefits from working
• Medical insurance (in some countries, this is tied to your job) or other benefits given by the firm (e.g. housing subsidies, company car, etc.)

Costs of your job (what you would gain if you lost your job):
• Utility from the free time you gain when out of work (also called the disutility of working)
• Disutility from the social and psychological costs of job loss
• Unemployment benefit you would get if unemployed and searching for another job
• Cost of travel to work minus cost of searching for a new job

Net cost of job loss = Benefits minus Costs. This is the ‘rent’ you get from having your job and is called your ‘employment rent’.

employment rent The economic rent a worker receives when the net value of her job exceeds the net value of her next best alternative (that is, being unemployed). Also known as: cost of job loss.

In the final row of the table, we calculate the net cost of job loss. This tells you how much so-called ‘rent’ you gain from being in your job (compared with the next best alternative, which is being unemployed and searching for a new job). This amount, which you would lose if you lost your job, is called your employment rent.

unemployment benefit A government transfer received by an unemployed person. Also known as: unemployment insurance.

utility A numerical indicator of the value that one places on an outcome, such that higher valued outcomes will be chosen over lower valued ones when both are feasible.

gig economy An economy made up of people performing services matched by means of a computer platform with those paying for the service.
Workers are paid for task performance and not per hour, are not legally recognized as employees of the company that owns the platform, and typically receive few if any benefits from the owners other than matching.

People who lose their jobs can typically expect help from family and friends while they are out of work. Also, in many economies, people who lose their jobs receive an unemployment benefit or financial assistance from the government. They may be able to earn a small amount in self-employment or by taking odd jobs.

In Figure 6.2, we used the concept of utility introduced in Unit 3. We can say that the worker’s utility is increased by the goods and services she can buy with her wage, but reduced by the unpleasantness of going to work and working hard all day—the disutility of work. Figure 6.3 takes the case of a worker called Maria and puts numbers on the hourly wage, the hourly unemployment benefit (in this example, equivalent to 50% of the wage for a 35-hour week), and a dollar value that she places on the cost of working for an hour (the disutility of work). To be specific, we assume Maria spends half her time at work working and the other half on social media. She reckons that working this hard is equivalent to a cost of $2 per hour to her. If she lost her job, she would have $6 per hour in unemployment benefit and could earn $2 per hour doing gig economy work (see Section 6.12) or doing work of that value around her house.

Benefits of her job (what Maria would lose if she lost it):
• Wage income ($12 per hour) minus unemployment benefit ($6 per hour) while searching for a job: 12 − 6 = $6

Costs of her job (what Maria would gain if she lost it):
• Disutility of working ($2 per hour): $2

Four years later, they were still making $13,300 less than similar workers who had been making the same initial wage, but whose firms did not lay off their workers. In the five years that followed their layoff, they lost the equivalent of an entire year’s earnings.
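The per-hour arithmetic behind Figure 6.3 can be sketched in a few lines of Python. Netting the gig earnings and the disutility of work against the wage in this way is our reading of the figures quoted in the text, not a formula the text states explicitly:

```python
# Maria's employment rent per hour, using the figures from the text.
# Assumption: hourly rent = (wage - disutility of work) minus what she
# would get per hour while unemployed (benefit + gig/home-production earnings).

wage = 12.0         # hourly wage
benefit = 6.0       # hourly unemployment benefit (50% of the wage)
gig_earnings = 2.0  # what she could earn per hour if unemployed
disutility = 2.0    # her cost of working hard for an hour

value_of_job = wage - disutility                 # $10 per hour
value_of_unemployment = benefit + gig_earnings   # $8 per hour

employment_rent = value_of_job - value_of_unemployment
print(employment_rent)  # 2.0 -- the hourly cost to Maria of losing the job
```

Under these assumptions, losing the job costs Maria $2 for every hour she would have worked, which is what gives the employer's threat of dismissal its force.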
Many, of course, did not find work at all. They suffered even greater costs. The year 1982 was not a good time to be looking for work in Pennsylvania, but similar estimates (from the US state of Connecticut between 1993 and 2004, for example) suggest that, even in better times, employment rents are large enough that workers would worry about losing them.

When workers receive employment rents, they are being paid more than in their next best alternative (being unemployed and looking for a job or, eventually, working for a different company). How does this benefit employers? Because the employee has something to lose, owners and managers can exert power over workers. The threat can be implicit or explicit, but it will make the worker perform in ways that she would not choose if she had nothing to lose.

Question 6.5 Choose the correct answer(s)
Which of the following statements regarding employment rents are correct?
• Higher employment rents make workers more likely to quit their jobs.
• They are the costs you have to pay for your employment.
• It equals the wage you receive in your employment.
• Employers can use high employment rents to exert power over employees.
• Workers would lose their employment rent if they quit. Therefore, higher rents make workers less likely to quit.
• They are the economic rents of employment, which is the net cost of job loss if the next-best alternative is unemployment.
• It is the net cost of job loss, which is not only the difference between the wage you receive in employment (minus the disutility of work) and the unemployment benefit, but also includes benefits such as firm-specific assets, medical insurance (if available), and the social status of being employed.
• Employers can use the implicit threat of being fired to make the worker perform in ways that he would not choose unless he had something to lose.
Question 6.6 Choose the correct answer(s) In which of the following employment situations would the employment rent be high, ceteris paribus? • in a job that provides many benefits, such as housing and medical insurance • in an economic boom, when the ratio of jobseekers to vacancies is low • when the worker is paid a high salary because she is a qualified accountant and there is a shortage of accountancy skills • when the worker is paid a high salary because the firm’s customers know and trust her • If the employee loses the job, all these benefits would be lost, so the economic rent from employment is high. • The cost of job loss is low because it would be easy to find another job. Therefore, the economic rent is low. • A qualified accountant will easily find another job at a similar salary, so the economic rent is low. • This worker is paid a high salary because there are aspects of her work that are specifically related to the firm and that will not be of value in other firms where she might work if she leaves. Other firms would pay a lower salary (at least initially) so the economic rent is high. 6.6 Work and wages: The labour discipline model When the cost of job loss (the employment rent) is large, workers will be willing to work harder in order to reduce the likelihood of losing the job. Holding constant other ways that it might influence the employment rent, a firm can increase the cost of job loss—and therefore the effort exerted by its employees—by raising wages. We refer to ‘effort’ as if it were a single thing, but of course what the firm needs to make profits is many dimensions of what an employee may do on the job, including physical effort, care, and not engaging in the kinds of vindictive sabotage that may have occurred in the Firestone plant in Decatur, Illinois. game A model of strategic interaction that describes the players, the feasible strategies, the information that the players have, and their payoffs. See also: game theory. 
We now represent this social interaction in the firm as a game played by the owners (through their managers) and the employees. As with other models, we ignore some aspects of their interaction to focus on what is important, following the principle that sometimes we see more by looking at less. On the stage of the firm, the cast of characters is just the owner (the employer) and a single worker, Maria. The game is sequential (one of them chooses first, like the ultimatum game that we saw in Section 3.2 of Unit 3) and is repeated in each period of employment. Here is the order of play: 1. The employer chooses a wage: This is based on his knowledge of how employees like Maria respond to higher or lower wages, and informs employees that they will be employed in subsequent periods at the same wage—as long as they work hard enough. 2. Maria chooses a level of work effort: This is in response to the wage offered, taking into account the costs of losing her job if she does not provide enough effort. The payoff for the employer is the profit. The greater Maria’s effort, the more goods or services she will produce, and the more profit he will make. Maria’s payoff is her net valuation of the wage she receives, taking into account the effort she has expended. Nash equilibrium A set of strategies, one for each player in the game, such that each player’s strategy is a best response to the strategies chosen by everyone else. If Maria chooses her work effort as a best response to the employer’s offer, and the employer chooses the wage that maximizes his profit given that Maria responds the way she does, their strategies are a Nash equilibrium. Employers typically hire work supervisors and may install surveillance equipment to keep watch over their employees, increasing the likelihood that the management will find out if a worker is not working hard and well. 
Here we will ignore these extra costs and just assume that the employer occasionally gets some information on how hard or well an employee is working. This is not enough to implement a piece-rate contract, but more than enough to fire a worker if the news is not good. Maria knows that the chance of the employer getting bad news decreases the harder she works. To decide on the wage to set, the employer needs to know how the employee’s work effort will respond to higher wages. We will consider Maria’s decision first.

The employee’s best response
Maria’s effort can vary between zero and one. We can think of this as the proportion of each hour that she spends working diligently (the rest of the time she is not working). An effort level of 0.5 indicates she is spending half the working day on non-work-related activities, such as checking Facebook, shopping online, or just staring out of the window.

reservation wage What an employee would get in alternative employment, or from an unemployment benefit or other support, were he or she not employed in his or her current job.

disutility of effort The degree to which doing some task (effort) is unpleasant.

We will assume that Maria’s reservation wage is $6. Even if she put in no work whatsoever (and so endured no disutility of effort, spending all day on Facebook and daydreaming), her job at a $6 wage would be no better than being without work. Therefore, she would not care one way or the other if her job ended. Her best response to a wage of $6 would be zero effort. If Maria receives an unemployment benefit or income from any of these sources, it will partially offset the lost wage income. Let us suppose that, while Maria remains unemployed, she will receive a benefit equivalent to being paid $6 an hour for a 35-hour week. This is her reservation wage: it is always available to her, so she would be indifferent between having a job that paid $6 an hour and not working. What if she were paid a higher wage?
For Maria, effort has a cost (the disutility of work) and a benefit (it increases the likelihood of her keeping the job, and the employment rent). In her choice of effort, she needs to find a balance between these two. A higher wage increases the employment rent and hence the benefit from effort, so it will lead her to choose a higher level of effort. Maria’s best response (the effort she chooses) will increase with the level of the wage chosen by the employer.

worker’s best response function (to wage) The optimal amount of work that a worker chooses to perform for each wage that the employer may offer.

Figure 6.5 shows the effort Maria chooses for each level of the wage, referred to as her best response curve, or worker’s best response function. (Just like the production functions in Unit 4, it shows how one variable, in this case effort, depends on another, the wage.) Maria’s best response to a wage offer depends on how long she would expect to be unemployed before getting a new job if she were to lose her job. We assume this is 10 months (roughly 44 weeks), as was typical among OECD countries in 2016.

Figure 6.5 Maria’s best response to the wage. Point J refers to the information in Figure 6.3 (wage = $12, effort = 0.5).

Effort per hour
Figure 6.5a Effort per hour, measured on the vertical axis, varies between zero and one.

The relationship between effort and the wage
Figure 6.5b If Maria is paid $6, she does not care if she loses her job because $6 is her reservation wage. This is why she provides no effort at a $6 wage. If she is paid more, she provides more effort.
The worker’s best response
Figure 6.5c The upward-sloping curve shows how much effort she puts in for each value of the hourly wage on the horizontal axis.

The effect of a wage increase when effort is low
Figure 6.5d When the wage is low, the best response curve is steep—a small wage increase raises effort by a substantial amount.

Diminishing marginal returns
Figure 6.5e At higher levels of wages, however, increases in wages have a smaller effect on effort.

The employer’s feasible set
Figure 6.5f The best response curve is the frontier of the employer’s feasible set of combinations of wages and effort that it gets from its employees.

The employer’s MRT
Figure 6.5g The slope of the best response curve is the employer’s marginal rate of transformation of higher wages into more worker effort.

Point J in Figure 6.5 represents the situation in Figure 6.3, as discussed at the end of the previous section. Maria’s reservation wage is $6, she is paid $12, and chooses effort of 0.5. The best response curve is concave, meaning that it becomes flatter as the wage and the effort level increase. This is because, as the level of effort approaches the maximum possible level, the disutility of effort becomes greater. In this case, it takes a larger employment rent (and hence a higher wage) to get more effort from the employee.
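The text describes the best response curve only by its properties: zero effort at the $6 reservation wage, upward-sloping, and concave. As an illustration (the functional form is our assumption, not the book's), here is one hypothetical curve with those properties that also passes through point J (wage $12, effort 0.5):

```python
import math

RESERVATION_WAGE = 6.0  # Maria's reservation wage from the text

def best_response(wage):
    """Effort per hour at a given wage (a hypothetical functional form)."""
    if wage <= RESERVATION_WAGE:
        return 0.0  # no employment rent, so no reason to exert effort
    # Concave in the wage; equals 0.5 at wage = 12 (point J), capped at 1
    return min(1.0, 0.5 * math.sqrt((wage - RESERVATION_WAGE) / 6.0))

print(best_response(6.0))   # 0.0 -- no effort at the reservation wage
print(best_response(12.0))  # 0.5 -- point J
# Diminishing returns: the first $3 above $6 buys more effort than the next $3
print(best_response(9.0) - best_response(6.0)
      > best_response(12.0) - best_response(9.0))  # True
```

Any concave function anchored at these two points would tell the same story; the square root is used only for concreteness.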
Seen from the standpoint of the owner or the employer, the best response curve shows how paying higher wages can elicit higher effort, but with diminishing marginal returns. In other words, the higher the initial wage, the smaller the increase in effort and output the employer gets from an extra $1 per hour in wages. The best response curve is the frontier of the feasible set of combinations of wages and effort the firm can get from its employees, and the slope of the frontier is the marginal rate of transformation of wages into effort.

Why not pay the lowest wage?
The lowest wage the firm could set for Maria would be the reservation wage, $6, where the best response curve hits the horizontal axis and effort is zero. We can see that the firm would never offer the lowest wage possible, because she would not exert any effort at work. We have drawn the best response function in Figure 6.5 under the assumption that unemployment is expected to last ten months. If the expected duration were to change, the best response function would change too. If economic conditions worsened, increasing unemployment duration, Maria’s employment rent would be higher. For any wage, her best response would be to exert a higher level of effort than that shown in Figure 6.5.

Question 6.7 Choose the correct answer(s)
Figure 6.5 depicted Maria’s best response curve when the expected duration of unemployment was 10 months. Which of the following statements are correct?
• If the expected unemployment duration increased to 11 months, Maria’s best response to a wage of $12 would be an effort level above 0.5.
• If the unemployment benefit were reduced, then Maria’s reservation wage would be higher than $6.
• Over the range of wages shown in the figure, Maria would never exert the maximum possible effort per hour.
• Increasing effort from 0.5 to 0.6 requires a bigger wage increase than increasing effort from 0.8 to 0.9.
• Maria’s best response to a wage of $12 is 0.5, when expected unemployment duration is 10 months. If duration increases to 11 months, the cost of job loss is higher, so Maria will work harder for the same wage.
• If the unemployment benefit were reduced, then the reservation wage would fall below $6.
• The maximum level of effort would not be provided over the wage range shown.
• When effort is lower, a smaller wage rise is required to increase it by 0.1.

6.7 Wages, effort, and profits in the labour discipline model
Maria is not in the situation that Angela faced when Bruno could order her to work at the point of a gun. Maria has bargaining power because she can always walk away—an option that, initially, Angela did not have. Maria chooses how hard she works. But the employer can determine the conditions under which she makes that choice. The owners and managers know that they cannot get Maria to provide more effort than is given by the best response curve shown in Figure 6.5. The fact that the best response curve slopes upwards means that employers face a trade-off. They can only get more effort by paying higher wages.

Firms maximize profits by minimizing costs of production
To maximize their profits, firms want to minimize the costs of production. In particular, they want to pay the lowest possible price for inputs. A company purchasing oil for use in the production process will look for the supplier that can provide it at the lowest price per litre, or equivalently, supply the most oil per dollar. Likewise, Maria provides an input to production, and her employer would like to purchase it at the lowest price. But this does not mean paying the lowest possible wage. We already know that, if he paid the reservation wage, workers might show up (they wouldn’t care one way or the other), but they would not work if they did. The wage, w, is the cost to the employer of an hour of a worker’s time.
But what matters for production is not how many hours Maria provides, but how many units of effort—effort is the input to the production process. If Maria chooses to provide 0.5 units of effort per hour and her hourly wage is w, the cost to the employer of a unit of effort is 2w. In general, if she provides e units of effort per hour, the cost of a unit of effort is w/e. So, to maximize profits, the employer should find a feasible combination of effort and wage that minimizes the cost per unit of effort, w/e.

efficiency unit A unit of effort is sometimes called an efficiency unit.

Another way to say the same thing is that the employer should maximize the number of units of effort (sometimes called efficiency units) that he gets per dollar of wage cost, e/w.

The firm’s isocost curves for effort
The upward-sloping straight line in Figure 6.6 joins together a set of points that have the same ratio of effort to wages, e/w. If the wage is $10 per hour and a worker provides 0.45 units of effort per hour, the employer gets 0.045 efficiency units per dollar. Equivalently, a unit of effort costs $10/0.45 = $22.22. The employer would be indifferent between this situation and one in which the wage is $20 with an effort of 0.9—the cost of effort is exactly the same at all points on the line. We will call this an isocost line for effort. These lines join points that have identical effects on the employer’s costs. We can also think of it as an indifference curve for the employer.

Figure 6.6 The employer’s indifference curves: Isocost curves for effort.

An isocost line for effort
Figure 6.6a If w = $10 and e = 0.45, e/w = 0.045. At every point on this line, the ratio of effort to wages is the same. The cost of a unit of effort is w/e = $22.22.
The slope of the isocost line The line slopes upward because a higher effort level must be accompanied by a higher wage for the e/w ratio to remain unchanged. The slope is equal to e/w = 0.045, the number of units of effort per dollar. Figure 6.6b The line slopes upward because a higher effort level must be accompanied by a higher wage for the e/w ratio to remain unchanged. The slope is equal to e/w = 0.045, the number of units of effort per dollar. Other isocost lines On an isocost line, the slope is e/w, but the cost of effort is w/e. The steeper line has a lower cost of effort, and the flatter line has a higher cost of effort. Figure 6.6c On an isocost line, the slope is e/w, but the cost of effort is w/e. The steeper line has a lower cost of effort, and the flatter line has a higher cost of effort. Some lines are better for the employer than others A steeper line means lower cost of effort and hence higher profits for the employer. On the steepest isocost line, he gets 0.7 units of effort for a wage of $10 (at B), so the cost of effort is$10/0.7 = $14.29 per unit. On the middle line he only gets 0.45 units of effort at this wage, so the cost of effort is$22.22 and profits are lower. Figure 6.6d A steeper line means lower cost of effort and hence higher profits for the employer. On the steepest isocost line, he gets 0.7 units of effort for a wage of $10 (at B), so the cost of effort is$10/0.7 = $14.29 per unit. On the middle line he only gets 0.45 units of effort at this wage, so the cost of effort is$22.22 and profits are lower. The slope is the MRS The employer is indifferent between points on an isocost line. Like other indifference curves, the slope of the effort isocost line is the marginal rate of substitution—the rate at which the employer is willing to increase wages to get higher effort. Figure 6.6e The employer is indifferent between points on an isocost line. 
Question 6.8 Choose the correct answer(s)

Consider isocost lines drawn on a graph with hourly wage on the horizontal axis and effort per hour on the vertical axis. Which of the following statements is correct?

• Isocost lines intersect the horizontal axis at the reservation wage.
• For an isocost line with a slope of 0.07, the cost of a unit of effort is $14.3.
• The slope of the isocost line is the employer’s marginal rate of transformation of higher wages into worker effort.
• Steeper isocost lines represent higher cost per unit of effort.

• Isocost lines go through the origin (zero wage, zero cost).
• The slope of the isocost lines represents the units of effort per dollar of wage cost, which is the inverse of the cost per unit of effort. The cost of a unit of effort is therefore 1/0.07 = $14.3.
• The slope of the isocost lines is the employer’s marginal rate of substitution, which is the rate at which the employer is willing to increase wages to get higher effort.
• The slope of the isocost lines represents the units of effort per dollar of wage cost. Steeper isocosts therefore mean more units of effort per dollar of wage cost, or equivalently, a lower cost per unit of effort.

To minimize costs, the employer will seek to reach the steepest isocost line for effort, where the cost of a unit of effort is lowest. (Remember, steeper isocost lines mean that a given increase in the wage will result in a larger increase in effort.) But the employer lacks the ability to dictate how much effort Maria puts into her work, and so has to pick some point on Maria’s best response curve.

tangency When two curves share one point in common but do not cross. The tangent to a curve at a given point is a straight line that touches the curve at that point but does not cross it.
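The isocost arithmetic from Figure 6.6 can be checked in a few lines. This is a minimal sketch: the wage–effort pairs are the ones used in the figure, and `cost_per_unit_effort` is simply the ratio w/e defined in the text.

```python
# Cost of a unit of effort along the isocost lines of Figure 6.6.
# An isocost line fixes the ratio e/w (effort per dollar of wage cost);
# its slope is e/w, and the cost of a unit of effort is the reciprocal, w/e.

def cost_per_unit_effort(w, e):
    """Cost to the employer of one unit of effort at hourly wage w and effort e."""
    return w / e

# Two points on the middle isocost line: the same e/w ratio, so the same cost.
cost_low = cost_per_unit_effort(10, 0.45)   # w = $10, e = 0.45
cost_high = cost_per_unit_effort(20, 0.90)  # w = $20, e = 0.90

# A point on the steeper line (point B): more effort per dollar, lower cost.
cost_b = cost_per_unit_effort(10, 0.70)

print(round(cost_low, 2), round(cost_high, 2), round(cost_b, 2))
# Both points on the middle line cost about $22.22 per unit of effort;
# point B on the steeper line is cheaper, at about $14.29.
```

This confirms why the employer prefers steeper isocost lines: a steeper slope e/w means a lower cost of effort w/e.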
The best the employer can do is to set the wage at $12 on the isocost line that is tangent to Maria’s best response curve (point A). Use the analysis in Figure 6.7 to see how the employer sets the wage.

The firm sets the ‘efficiency’ wage
Figure 6.7 The employer sets the wage to minimize the cost of effort.

Minimizing the cost of effort
Figure 6.7a To maximize profits, the employer wants to obtain effort at the lowest cost, and will seek to get onto the steepest isocost line possible. But, without the ability to dictate the level of effort, the employer must pick some point on the worker’s best response curve.

C is not the best the employer can do
Figure 6.7b Could choosing a point such as C be optimal? No. It is clear that, by paying more, the employer will benefit from a lower wage–effort ratio, because effort will increase more than proportionally to the wage.

Point A is the best the employer can do
Figure 6.7c The best the employer can do is the isocost line that is just touching (tangent to) the worker’s best response curve.

MRS = MRT
Figure 6.7d At this point, the marginal rate of substitution (the slope of the isocost line for effort) is equal to the marginal rate of transformation of higher wages into greater effort (the slope of the best response function).
Point B
Figure 6.7e Points on steeper isocosts, such as point B, would have lower costs for the employer but are infeasible.

Minimum feasible costs
Figure 6.7f Therefore, $12 is the hourly wage that the employer should set to minimize costs and maximize profits.

In Figure 6.7, the employer will choose point A, offering a wage of $12 per hour to hire Maria, who will exert effort of 0.5. The employer cannot do better than this point—any point with lower costs, for example, point B, is infeasible.

marginal rate of substitution (MRS) The trade-off that a person is willing to make between two goods. At any point, this is the slope of the indifference curve. See also: marginal rate of transformation.

marginal rate of transformation (MRT) The quantity of some good that must be sacrificed to acquire one additional unit of another good. At any point, it is the slope of the feasible frontier. See also: marginal rate of substitution.

The employer minimizes costs and maximizes profit at the point where the employer’s MRS (the slope of the indifference curve or isocost line) equals the MRT (the slope of the best response curve, which is the employer’s feasible frontier). This balances the trade-off the employer is willing to make between wages and effort against the trade-off the employer is constrained to make by Maria’s response.

constrained choice problem This problem is about how we can do the best for ourselves, given our preferences and constraints, and when the things we value are scarce. See also: constrained optimization problem.
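The employer’s problem in Figure 6.7—finding the point on the worker’s best response curve that minimizes the cost of effort w/e, where MRS = MRT—can be sketched numerically. The functional form and the parameter values below (the reservation wage, the curvature) are illustrative assumptions, not taken from the text, so the optimum here need not land at the $12 wage of the figure; only the tangency logic is the point.

```python
import math

# A numerical sketch of the employer's wage-setting problem, under an
# ASSUMED best response curve: effort is zero at an assumed reservation
# wage and rises with the wage at a diminishing rate.
W_RES = 6.0   # assumed reservation wage, $/hour (illustrative)
K = 0.12      # assumed curvature of the best response curve (illustrative)

def effort(w):
    """Hypothetical best response: effort chosen at hourly wage w."""
    return 1.0 - math.exp(-K * (w - W_RES)) if w > W_RES else 0.0

# The employer maximizes effort per dollar, e/w (equivalently, minimizes
# the cost of a unit of effort, w/e), over a grid of feasible wages.
wages = [W_RES + 0.001 * i for i in range(1, 40000)]  # up to about $46/hour
w_star = max(wages, key=lambda w: effort(w) / w)
e_star = effort(w_star)

# At the optimum the isocost line is tangent to the best response curve:
# MRS (slope of the isocost line, e/w) = MRT (slope of the best response).
slope_isocost = e_star / w_star
slope_best_response = K * math.exp(-K * (w_star - W_RES))
```

At the grid optimum the two slopes agree to numerical precision, which is exactly the MRS = MRT condition described above.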
efficiency wages The payment an employer makes that is higher than an employee’s reservation wage, so as to motivate the employee to provide more effort on the job than he or she would otherwise choose to make. See also: labour discipline model, employment rent. This is a constrained choice problem, similar to the one in Unit 4. There, individuals maximizing utility chose working hours where MRS = MRT, and the slope of their indifference curve equalled the slope of the feasible frontier determined by the production technology. When wages are set by the employer in this manner, they are sometimes called efficiency wages because the employer is recognizing that what matters for profits is e/w—the efficiency units per dollar of wage costs—rather than how much an hour of work costs. labour discipline model A model that explains how employers set wages so that employees receive an economic rent (called employment rent), which provides workers an incentive to work hard in order to avoid job termination. See also: employment rent, efficiency wages. What has the labour discipline model told us? • Equilibrium: In the owner–employee game, the employer offers a wage and Maria provides a level of effort in response. Their strategies are a Nash equilibrium. • Rent: In this allocation, Maria provides effort because she receives an employment rent that she might lose if she were to slack off on the job. • Power: Because Maria fears losing this economic rent, the employer is able to exercise power over her, getting her to act in ways that she would not do without this threat of job loss. This contributes to the employer’s profits. Exercise 6.3 The employer sets the wage Would any of the following affect Maria’s best response curve or the firm’s isocost lines for effort in Figure 6.7? If so, explain how. 1. The government decides to increase childcare subsidies for working parents but not for those unemployed. Assume Maria has a child and is eligible for the subsidy. 2. 
Demand for the firm’s output rises as celebrities endorse the good. 3. Improved technology makes Maria’s job easier.

Question 6.9 Choose the correct answer(s)

Figure 6.7 depicts the efficiency wage equilibrium of a worker and a firm. According to this figure:

• Along the isocost line tangent to the best response curve, doubling of the per-hour effort from 0.45 to 0.90 would lead to an increased profit for the firm.
• The slope of each isocost line is the number of units of effort per dollar.
• At the equilibrium point, the marginal rate of transformation on the isocost line equals the marginal rate of substitution on the worker’s best response curve.
• Points C and A both represent Nash equilibria because they are on the best response curve.

• Along the isocost line, doubling effort requires doubling the wage. The cost of effort would not change, so profit would not change either.
• Isocost lines have a constant ratio of effort to wage, e/w. Since e is on the vertical axis, and w is on the horizontal axis, the slope is e/w, which is the number of units of effort per dollar.
• At the equilibrium point, the marginal rate of substitution between higher wage cost and higher effort on the isocost line equals the marginal rate of transformation of higher wages into greater effort on the worker’s best response curve.
• At point C, the worker’s choice of effort is a best response if the employer chooses this wage. But the employer would not be doing the best he could, given the worker’s strategy for choosing effort, so it is not a Nash equilibrium.

6.8 Why there is always involuntary unemployment

When we think about the implications of the labour discipline model for the whole economy, it tells us something else, which may at first seem surprising:

involuntary unemployment A person who is seeking work, and willing to accept a job at the going wage for people of their level of skill and experience, but unable to secure employment is involuntarily unemployed.
There must always be involuntary unemployment. Being unemployed involuntarily means not having a job, although you would be willing to work at the wage that other workers like you are receiving. In developing our model, we assumed that Maria could expect to be unemployed for ten months before receiving another wage offer at the same level. But the model implies that there must be an extended period of unemployment. To see why, try to imagine an equilibrium in the game between Maria and her employer, in which the employer pays her a wage of $12 per hour and, if she loses her job, she could immediately find another at the same wage. In that case, Maria’s employment rent would be zero. She would be indifferent between keeping the job and losing it. Therefore, her best response would be an effort level of zero. But this could not be an equilibrium—the employer would not pay $12 an hour to someone who did no work. Now imagine there were plenty of jobs available in the economy at $12 per hour and no one was unemployed. Immediately, you can see that this situation could not last. Employers would offer higher wages, say an extra $4 per hour, to ensure that their workers had something to lose and would therefore work hard. But, after offering higher wages, they would not be able to offer as many jobs. Workers who lost their jobs would no longer be able to find new ones easily. Jobs would be scarce, and it might take weeks or months to find another. The economy would move to a new equilibrium with higher wages and involuntary unemployment. Employees would be earning $16 an hour and those who lost their jobs would be willing to accept another at $16, but they would not immediately be able to find one. In equilibrium, both wages and involuntary unemployment have to be high enough to ensure that there is enough employment rent for workers to put in effort. Unemployment is an important concern for voters, and so for the policymakers who represent them.
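The role of the unemployment spell in this argument can be made concrete with some hedged arithmetic. The $12 wage and the ten-month expected spell come from the text; the reservation wage, the monthly hours, and the decision to ignore the disutility of effort are illustrative assumptions.

```python
# Employment rent under assumed numbers. The text gives the $12 wage and
# the ten-month expected unemployment spell; the reservation wage and hours
# below are illustrative, and the disutility of effort is ignored for
# simplicity.

def employment_rent(wage, reservation_wage, hours_per_month, months_unemployed):
    """Total rent lost on dismissal: the hourly rent times the hours of
    work lost over the expected spell of unemployment."""
    rent_per_hour = wage - reservation_wage
    return rent_per_hour * hours_per_month * months_unemployed

HOURS = 160        # assumed hours of work per month
RESERVATION = 6.0  # assumed reservation wage, $/hour

rent_long_spell = employment_rent(12, RESERVATION, HOURS, 10)

# If Maria could walk into an identical job immediately, the spell would be
# zero months, the rent would vanish, and her best response would be zero
# effort -- which is why this cannot be an equilibrium.
rent_no_spell = employment_rent(12, RESERVATION, HOURS, 0)
```

With these assumed numbers the rent at stake over a ten-month spell is substantial, while with no spell it is exactly zero: the threat of job loss only disciplines effort if jobs are scarce.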
We can use this model to see how policies that governments pursue to alter the level of unemployment, or to provide income to unemployed workers, will affect the profits of firms and the effort level of their employees.

6.9 Putting the model to work: Owners, employees, and public policy

Until now we have considered how the employer chooses a point on the best response function. But changes in economic conditions or public policies can shift the entire best response function, moving it to the right (or up) or to the left (or down). The worker’s incentive to choose a high level of effort depends on how much she has to lose (the employment rent), but also the likelihood of losing it. The position of the best response function depends on:

• the utility of the things that can be bought with the wage
• the disutility of effort
• the reservation wage
• the probability of getting fired when working at each effort level

If there are changes in any of these factors, the best response curve will shift. Public policy can affect the position of the best response function.

Unemployment affects the wage firms set

Imagine how an increase in the unemployment rate affects the best response curve. When unemployment is high, workers who lose their jobs can expect a longer spell of unemployment. Recall that unemployment benefits (including support from family and friends) are limited, so the longer the expected spell of unemployment, the lower the level of the unemployment benefit per hour (or per week) of lost work. An increase in the duration of a spell of unemployment has two effects:

• It reduces the reservation wage: This increases the employment rent per hour.
• It extends the period of lost work time: This increases the total cost of job loss and hence total employment rents.

Figure 6.8 shows the effects on the best response curve of a rise in unemployment, and also of a rise in unemployment benefits.
Figure 6.8 The best response curve depends on the level of unemployment and the unemployment benefit.

The status quo
Figure 6.8a The position of the best response curve depends on the reservation wage. It crosses the horizontal axis at this point.

The effect of unemployment benefits
Figure 6.8b A rise in the unemployment benefit increases the reservation wage and shifts the worker’s best response curve to the right.

An increase in unemployment
Figure 6.8c If unemployment rises, the expected duration of unemployment increases. Therefore, the worker’s reservation wage falls, and the best response curve shifts to the left.

Effort changes for each wage
Figure 6.8d For a given hourly wage, say $18, workers put in different levels of effort when the levels of unemployment or unemployment benefit change.

Economic policies affect the wage firms set

A rise in the level of unemployment shifts the best response curve to the left:

• For a given wage, say $18, the amount of effort that the worker will provide increases, improving the profit-making conditions for the employer.
• The wage that the employer would have to pay to get a given effort level, say 0.6, decreases.

A rise in unemployment benefits shifts the best response curve to the right, so it has the opposite effects.
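The shifts described for Figure 6.8 can be sketched with a toy best response function. The functional form, the curvature, and the specific reservation wages below are assumptions for illustration; only the direction of the shifts (left for higher unemployment, right for a more generous benefit) comes from the text, and the $18 wage is the one held fixed in the figure.

```python
import math

# Sketch of how the best response curve shifts (Figure 6.8). The shape and
# parameters are ASSUMED; the reservation wage is the point where the curve
# crosses the horizontal axis, as in the figure.

def best_response(w, reservation_wage, k=0.12):
    """Hypothetical effort chosen at hourly wage w, zero at or below the
    reservation wage and rising with diminishing returns above it."""
    if w <= reservation_wage:
        return 0.0
    return 1.0 - math.exp(-k * (w - reservation_wage))

WAGE = 18.0  # the hourly wage held fixed in the comparison

e_status_quo = best_response(WAGE, reservation_wage=6.0)

# Higher unemployment -> longer expected spell -> lower reservation wage:
# the curve shifts left, so effort at $18 rises.
e_high_unemployment = best_response(WAGE, reservation_wage=3.0)

# More generous unemployment benefit -> higher reservation wage:
# the curve shifts right, so effort at $18 falls.
e_high_benefit = best_response(WAGE, reservation_wage=9.0)
```

Comparing the three values at the same $18 wage reproduces the qualitative result in the text: effort is highest when unemployment is high and lowest when benefits are generous.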
Economic policies can alter both the size of the unemployment benefit and the extent of unemployment (and hence the duration of a spell of unemployment). These policies are often controversial.

• Workers are favoured by a rightward shift of the worker’s best response function: This may be a result, for example, of more generous unemployment benefits. They will put in less effort for any given wage.
• Employers are favoured by a leftward shift: This may be a result, for example, of higher unemployment. They will acquire the effort of their workers at a lower cost, raising profits.

Exercise 6.4 Effort and wages

Suppose that, with the status quo best response curve in Figure 6.8, the firm chooses the wage to minimize the cost of effort, and the worker’s best response is an effort level of 0.6. If unemployment rose:

1. Would effort be higher or lower than 0.6 if the firm did not change the wage?
2. How would the firm change the wage if it wanted to keep the effort level at 0.6?
3. How would the wage change if the firm minimized the cost of effort at the new unemployment level?

How economists learn from data Workers speed up when the economy slows down

The idea that employment rents are an incentive for employees to work harder is illustrated in a study by Edward Lazear (an economic advisor to former US President George W. Bush) and his co-authors. They investigated a single firm during the global financial crisis, to see how the managers and workers reacted to the turbulent economic conditions. The firm specialized in technology-based services—such as insurance-claims processing, computer-based test grading, and technical call centres—and operated in 12 US states. The nature of the work made it easy for the firm’s management to track the workers’ productivity, which is a measure of worker effort. It also allowed Lazear and his colleagues to use the firm’s data from 2006–2010 to analyse the effect on worker productivity of the worst recession since the Great Depression.
When unemployment rose, workers could expect a longer spell of unemployment if they lost their jobs. Firms did not use their increased bargaining power to lower wages as they could have, fearing their employees’ reaction. Lazear and his co-authors found that, in this firm, productivity increased dramatically as unemployment rose during the financial crisis. One possible explanation is that average productivity increased because management fired the least productive members of the workforce. But Lazear found that the effect was more due to workers putting in extra effort. The severity of the recession raised the workers’ employment rent for any given wage, and they were therefore willing to work harder. We would predict from our model that the best response curve would have shifted to the left as a result of the recession. This meant that (unless employers lowered wages substantially) workers would work harder. Apparently, this is what happened.15 Our model shows that employers could have cut wages, while sustaining an employment rent sufficient to motivate hard work. An earlier recession provided another insight that helps to explain their reluctance to reduce wages in the crisis. Truman Bewley, an economist, was puzzled when he saw only a handful of firms in the northeast of the US cutting wages during the recession of the early 1990s. Most firms, like the one Lazear’s team studied, did not cut their wages at all. Bewley interviewed more than 300 employers, labour leaders, business consultants, and careers advisors in the northeast of the US. He found that employers chose not to cut wages because they thought it would hurt employee morale, reducing productivity and leading to problems of hiring and retention. 
They thought it would ultimately cost the employer more than the money they would save in wages.16 Exercise 6.5 Lazear’s results Use the best response diagram to sketch the results found by Lazear and co-authors in their study of a firm during the global financial crisis. 1. Draw a best response curve for each of the following years and explain what it illustrates: (a) the precrisis period (2006) (b) the crisis years (2007–8) (c) the postcrisis year (2009) Assume that the employer did not adjust wages. 2. Is there a reason why a firm might not cut wages during a recession? Think about the research of Truman Bewley and the experimental evidence about reciprocity in Unit 4. Question 6.10 Choose the correct answer(s) Which of the following statements are true? • If unemployment benefits are increased, the minimum cost of a unit of effort for the employer will rise. • If the wage doesn’t change, employees will work harder in periods of high unemployment. • If workers continue to receive benefits however long they remain unemployed, an increase in the level of unemployment will have no effect on the best response curve. • If an employee’s disutility of effort increases, the reservation wage will rise. • An increase in unemployment benefits shifts the best response curve to the right. The employer will no longer be able to reach the isocost line tangent to the original best response curve, so the cost of effort must rise. • In periods of high unemployment, the cost of job loss is higher. At any given wage level, employees will choose higher effort to reduce the chance of losing their jobs. • In this case, an increase in the level of unemployment would not affect the reservation wage, but it would increase the cost of job loss, so the best response curve will change. • At the reservation wage, the employee is indifferent between employment and unemployment, and would exert no effort. A change in the disutility of effort would therefore have no effect. 
6.10 Why do employers pay employment rents to their workers?

When a firm is deciding on how to run its plant, it looks for the lowest cost available, as we saw in Section 6.7. If we were to observe a firm paying 0.15 euros per kilowatt hour of electricity when it could be purchased for 0.11 euros, we would conclude that, for some reason, the firm was not minimizing its costs and therefore could not be maximizing its profits. It would be throwing away money. When workers receive employment rents, they value having their jobs more than their next best alternative (being unemployed and searching for and eventually taking some other job). This means that the employer is paying more than the minimum that would induce the worker to prefer taking the job over remaining unemployed. Isn’t this just a gift from employer to worker? How does this benefit employers? We have explained one reason:

• The fact that workers receive a rent that can be withdrawn by the employer (by firing the worker) provides a powerful motivation for the employee to work hard and well.

There are other ways that paying a rent to the worker contributes to the profits of the firm.

• The rent makes the worker more likely to stay with the firm: If she were to quit the job, the firm would bear the cost of recruiting and training someone else.
• The rent expands the pool of job applicants from whom the firm can choose: If the unemployed are similar to the employees of the firm, the larger the rent received by workers, the more attractive the firm will be to currently unemployed workers. A result will be that, when the firm wishes to expand or replace workers who retire, quit, or are fired, they will be able to choose from a larger selection of job applicants.
• The rent may be experienced as a gift that is reciprocated by the worker: Though it benefits the owner of the firm (for the above reasons), the rent received by the worker may be experienced by the employee as an act of generosity that would be reciprocated by hard work and loyalty to the firm and its owners.

What, then, is the difference between labour and kilowatt hours, or office furniture, or any of the other non-labour inputs of the firm, or any of the other goods or services for which no firm would ever pay more than the minimum? The answer is simple—for electricity, all that matters is that the good or service is delivered to the firm. But for the firm, making a profit is not as simple as getting the worker to show up for work, as Firestone Tire and Rubber Company discovered when the strikers at Decatur returned to work. Because the labour contract is incomplete, as we have seen in Section 6.4, the worker must also be motivated to work hard, and stay with the firm.

Question 6.11 Choose the correct answer(s)

Which of the following are correct explanations of why, when purchasing electricity, the firm will always seek the cheapest provider, but it will not pay an employee the lowest amount that would motivate the worker to accept the job?

• A rent provides a powerful motivation for the employee to work hard and well.
• There are many competitors supplying electricity, but this is not the case for labour.
• Electricity (or equivalent forms of power) represents a larger share of the firm’s costs than does labour.
• Paying the employee more makes it more likely that she will remain with the firm.

• This is the definition of the employment rent that forms part of the employment contract.
• There are many competing suppliers of labour, but the firm will always pay above the reservation wage; among the suppliers of electricity, however many there are, the firm will seek the lowest cost provider.
• Cost minimization could justify why the firm seeks the cheapest electricity provider, but does not explain why it does not pay the employee their reservation wage. If a firm were seeking to minimize costs, it would want to pay its employees as little as possible.
• By raising the cost of job loss, the employment rent paid by the employer makes it more likely the employee will stay at the firm.

6.11 Another kind of business organization: Cooperative firms

worker-owned cooperative A form of business in which a substantial fraction of the capital goods are owned by employees rather than being owned by those who are not involved in production in the firm; worker-owners typically elect a manager to make day-to-day decisions.

cooperative firm A firm that is mostly or entirely owned by its workers, who hire and fire the managers.

Even in capitalist economies, some business organizations have an entirely different structure to the one we have been analysing—their workers are the owners of the capital goods and other assets of the company, and they select managers who run the company on a day-to-day basis. This form of business organization is called a worker-owned cooperative or cooperative firm.

During the twentieth century, worker-owned plywood producers successfully competed with traditional capitalist firms in the US. John Pencavel. 2002. Worker Participation: Lessons from the Worker Co-ops of the Pacific Northwest. New York, NY: Russell Sage Foundation Publications.

The knowledge-based economy is creating new forms of firms, neither capitalist nor worker-owned. Tim O’Reilly and Eric S. Raymond. 2001. The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary. Sebastopol, CA: O’Reilly.

One well-known example of a cooperative is the British retailer John Lewis Partnership, founded in 1864 and held in trust for its employees since 1950.
Every employee is a partner, and employee councils elect five out of seven members of the company board. The benefits for employees (pension, paid holidays, long-service sabbaticals, and social activities) are generous, and the business’s profits are shared out as a bonus, calculated as a percentage of each person’s salary every year. John Lewis has been one of the UK’s most consistently successful retail businesses.

Cooperatives have fewer supervisors

Worker-owned cooperatives are hierarchically organized, like conventional firms, but the directives issued from the top of the hierarchy come from people who owe their jobs to the worker-owners. Other than this, the main difference between conventional firms and worker-owned cooperatives is that the cooperatives need fewer supervisors and other management personnel to ensure that the worker-owners work hard and well. Fellow worker-owners will not tolerate a shirking worker because the shirker is reducing the profit share of the other workers. Reduced need for supervision is among the reasons that worker-owned cooperatives produce at least as much per hour (if not more) than their conventional counterparts. There are typically fewer inequalities in wages and salaries within the company in worker-owned cooperatives than in conventional firms, for example between managers and production workers. And worker-owned cooperatives tend not to lay off workers when the economy goes into recession, offering their worker-owners a kind of insurance (often they cut back on the hours of all workers rather than terminating the employment of some). Case studies show that, in those unusual companies owned primarily by the workers themselves, work is done more intensely with less supervision.
There have been many attempts to establish other types of business organization throughout recent history, but borrowing the funds to start and sustain worker-owned companies is often difficult because, as we will see in Unit 9, banks are often reluctant to lend funds (except at high interest rates) to people who are not wealthy. Exercise 6.6 A worker-owned cooperative In Figure 6.1 we showed the actors and decision-making structure of a typical firm. 1. How do the actors and decision-making structure of John Lewis Partnership differ from that of a typical firm? 2. Redraw Figure 6.1 to illustrate your answer to Question 1. Question 6.12 Choose the correct answer(s) Which of the following statements regarding a cooperative firm such as John Lewis are correct? • The firm is owned by shareholders, some of whom are employees. • Workers typically exert more effort, despite having less supervision. • During a downturn, the firm tends to reduce working hours for all workers, rather than fire some workers. • Profits are not paid out to the owners but retained within the firm for future investment. • The firm is owned by the employees, every one of whom is a partner. • This is true in a cooperative firm, where fellow worker-owners do not tolerate a shirking colleague. • This is true in a cooperative firm, where the effects of a downturn are shared among the worker-owners. • A significant proportion of the profits is retained for future investment, while the rest is shared out as a bonus between the employees, who are the owners. Exercise 6.7 Using Excel: Who owns the firms? In this unit, we learned that owners are at the top of the ownership structure of a ‘typical’ firm, with managers in the middle and employees at the bottom of the hierarchy. We will now look at the characteristics of firm owners in different countries. 
The data you will be using is from the World Bank’s Enterprise Surveys, which collect information about characteristics of firms around the world and the business environment they face. Download the firm ownership data. There are four variables related to ownership: • proportion of private domestic ownership in a firm • percentage of firms with at least 10% of foreign ownership • percentage of firms with at least 10% of government/state ownership • percentage of firms with legal status of sole proprietorship (these firms are not a legally separate entity from their owner, and cannot sell shares to raise funds for the business) 1. Choose two countries in the dataset and filter the data so that only the entries with ‘All’ for the variable ‘Subgroup Level’ are visible. For each country, plot a column chart showing these four variables, with percentages on the vertical axis and sector (manufacturing or services) on the horizontal axis. Make sure to include a legend and label the axes appropriately. (If your chosen country has more than one year of available data, plot a separate column chart for each year). For help on how to filter the data and draw a column chart, go back to Exercise 1.3 in Unit 1. 2. The variable ‘Subgroup’ shows the same four variables, but for different subsectors, firm sizes, regions within a country, exporting behaviour, and gender of the top manager. Choose one subgroup and plot a separate column chart for each country, showing these four variables, with percentages on the vertical axis and the variable ‘Subgroup Level’ (containing the name of each subgroup) on the horizontal axis. 3. Suggest some explanations for any similarities or differences in firm ownership across countries and/or time that you observe in your charts from Questions 1 and 2. (You may find it helpful to research the institutions and policies related to business activity in your chosen countries.) 
Great economists John Stuart Mill John Stuart Mill (1806–1873) was one of the most important philosophers and economists of the nineteenth century. His book, On Liberty (1859),17 parallels Adam Smith’s Wealth of Nations in advocating limits on governmental powers, and is still an influential argument in favour of individual freedom and privacy. Mill thought that the structure of the typical firm was an affront to freedom and individual autonomy. In The Principles of Political Economy (1848), Mill described the relationship between firm owners and workers as an unnatural one: To work at the bidding and for the profit of another, without any interest in the work … is not, even when wages are high, a satisfactory state to human beings of educated intelligence … Attributing the conventional employer–employee relationship to the poor education of the working class, he predicted that the spread of education, and the political empowerment of working people, would change this situation: The relation of masters and work-people will be gradually superseded by partnership … perhaps finally in all, association of labourers among themselves.18 Exercise 6.8 Was Mill wrong? Why do you think Mill’s vision of a postcapitalist economy of worker-owned cooperatives has not yet occurred? 6.12 Another kind of business organization: The gig economy gig economy An economy made up of people performing services matched by means of a computer platform with those paying for the service. Workers are paid for task performance and not per hour, are not legally recognized as employees of the company that owns the platform, and typically receive few if any benefits from the owners other than matching. A ‘gig’ for a musician or comedian is a single appearance for which they will be paid not by the hour, but an agreed sum for the performance. 
The gig economy is not about jokes and tunes, however; it refers to the combined activities of Uber or Lyft drivers, TaskRabbits, Upworkers, Mechanical Turkers, and others who transport people and goods, assemble online-purchased furniture at home, and perform other well-defined tasks for which they are paid a fixed rate. Gig workers do their jobs independently, not as members of a team, and gain access to their ‘gigs’ by means of a two-sided digital platform that connects those who will pay for the work with those who perform it. The gig economy provides an illuminating contrast with the model of labour discipline studied in this unit. The key difference is that there is no labour discipline problem, because the tasks performed are sufficiently well defined that a virtually complete contract is possible—if your Lyft driver does not get you from your hotel to the airport, she does not get paid. If you hire someone to assemble your flat-packed furniture, the TaskRabbit worker putting it together does not make any money until it is assembled properly. An important feature of the gig economy is that the only way that workers can get gigs is through the platforms owned by a few firms—for example Uber and Lyft for taxi services, TaskRabbit for tasks around the home, or Mechanical Turk for small administrative jobs. This means that those performing the gigs have no real bargaining power—if a TaskRabbit worker objects to the terms, there will always be another Tasker to do the job, and the worker who refused it will be unlikely to find better gigs on that platform. The digital platform that allows those who need a gig performed to connect with those performing the gigs makes substantial mutual benefits possible by putting together gig workers—who have free time and the skills, a vehicle, or other equipment required—with those willing to pay for a completed gig. 
But because there are few platforms and many gig workers, the workers typically receive very little pay for often difficult and onerous work. A result is that gig workers face extraordinary economic insecurity—they are not guaranteed a fixed schedule of hours and pay, nor do they receive health insurance benefits, maternity leave, holiday pay, or pension contributions through their employer. Why does TaskRabbit, for example, not pay enough for gigs so that the Taskers receive an employment rent, as would be the case in the typical office or plant described by the labour discipline model? The answer is that they do not need to pay more than the gig worker’s next best alternative. They do not need to motivate the worker to do the job—if it is not done, the worker will not be paid. A result is that the gig economy can often produce services at a lower cost and price than are available from conventional firms. The gig economy affects employees in conventional firms The gig economy makes up only a small portion of the labour market, even in those high-income economies where, for example, ride services like Uber and Lyft have competed successfully against conventional taxi firms. It has at least three effects on employees working in conventional firms: • They may benefit as consumers: The firm may become more profitable by taking advantage of low-cost ways of getting tasks such as taxi services, deliveries, or repairs done that had previously been performed by conventional suppliers. This may be a positive effect for workers. • It may provide an additional source of income should the worker lose a job: Working as an occasional TaskRabbit Tasker or Deliveroo rider, for example, is an alternative to unemployment benefits. This would be a positive effect, because it could improve the worker’s reservation position. 
• The gig economy may make it more difficult for some types of worker to find reemployment: For example, a driver for a delivery service, who loses a job under a conventional labour contract, may find fewer similar jobs will be open in future. This would be a negative effect, as it would be likely to extend the expected length of a spell of unemployment. 6.13 Principals and agents: Interactions under incomplete contracts In the relationship between Maria and her employer, Maria’s work effort matters to both parties but is not covered by the employment contract. This leads to the existence of employment rents. If they had been able to write a complete contract, the situation would have been quite different. The employer could have offered her an enforceable contract, specifying both the wage and the exact level of effort she should provide, and if these terms were acceptable to her, she would have agreed and worked as required. To maximize profit, the employer would have chosen a contract that was only just acceptable, so Maria would not have earned any rents. This example is not unusual. In practice, all employment relationships are governed by incomplete contracts. Employment contracts often do not even bother to mention that the worker should work hard and well. By contrast, the way we have described the gig economy means that a gig worker does not have an employment contract with a firm. The nature of work in the gig economy is the subject of legal battles in many countries. Why are contracts incomplete? Thinking about some examples of economic interactions, we can see that there are several reasons for the absence of a complete contract: • Information is not verifiable: For a contract to be enforceable, relevant information must be observable by both parties, but also verifiable by third parties such as courts of law. The court must be able to establish whether or not the requirements of the contract were met. 
Verifiable information is often unavailable; for example, it may be impossible to prove whether the poor condition of a rented apartment is due to normal wear-and-tear or the tenant’s negligence. • Time and uncertainty: A contract is generally executed over a period of time—for example, specifying that Party A does X now and Party B does Y later. But what B should do later may depend on things that are unknown when the contract is written. People are unlikely to be able to anticipate every possible thing that might happen in future—and trying to do so would probably not be cost effective. • Measurement: Many services and goods are inherently difficult to measure or describe precisely enough to be written into a contract. How would the restaurant owner measure how pleasantly his waiters interact with customers? • Absence of a judiciary: For some transactions, there are no judicial institutions (courts or other relevant third parties) capable of enforcing contracts. Many international transactions are of this type. • Preferences: Even where the nature of the goods or services to be exchanged would permit a more complete contract, a less complete contract might be preferred. Intrusive surveillance of workers may backfire if the employer’s distrust angers the workers, leading to less satisfactory work performance. You do not necessarily want to know the exact quality of a concert before you buy the ticket—discovering it may be part of the experience. Principal–agent models principal–agent relationship This is an asymmetrical relationship in which one party (the principal) benefits from some action or attribute of the other party (the agent) about which the principal’s information is not sufficient to enforce in a complete contract. See also: incomplete contract. Also known as: principal–agent problem. In the case of Maria and her employer, the employer is called the principal. 
The employer would like to offer Maria, the agent, an employment contract and she wants the job, but the amount of effort she will provide cannot be specified in the contract because it is not verifiable. The relationship between the two actors within the firm is an example of a principal–agent relationship. Our model of Maria’s employment is an example of a principal–agent model, in which an action taken by the agent is ‘hidden’ from the principal, or ‘unobservable’. • The agent: This actor takes some action, such as working hard. • The principal: This actor benefits from this action. • A conflict of interest: This action is something the agent would not choose to do, perhaps because it is costly or unpleasant. • A hidden action: Information about this action is either not available to the principal or is not verifiable. • An incomplete contract: There is no way that the principal can use an enforceable contract to guarantee that the action is performed. hidden attributes (problem of) This occurs when some attribute of the person engaging in an exchange (or the product or service being provided) is not known to the other parties. Example: an individual purchasing health insurance knows her own health status, but the insurance company does not. Also known as: adverse selection. See also: hidden actions (problem of). In short, a hidden action problem occurs when there is a conflict of interest, between a principal and an agent, over some action that may be taken by the agent, and this action cannot be subjected to a complete contract. Incomplete contracts are the rule, not the exception, in the economy Using the lens of the principal–agent relationship, we can see many other ways in which we interact in the economy and society without a complete contract: • Banks lend money to borrowers in return for a promise to repay the full amount plus the stipulated interest: But this may be unenforceable if the borrower is unable to repay. 
• Owners of firms would like managers to maximize the value of the owners’ assets: But managers also have things they like, which reduce the owners’ wealth (such as flying first class, or really expensive office furniture). Managerial contracts often cannot specify an enforceable requirement to maximize the owners’ wealth. • Landlords rent apartments to tenants who sign contracts that require they maintain the value of the property: But aside from gross neglect, the liability for not maintaining the property is unenforceable. • Insurance companies ask people to sign contracts that require they should behave prudently: Insurance contracts require (but typically cannot enforce) that people who are insured do not take unreasonable risks. • Families purchase education and health services in many countries: But the quality of the service that will be provided to citizens is rarely specified in a contract (and would be unenforceable if it were). • Parents care for their children: They hope their children will take care of them when they are old and unable to work, but our children do not sign a contract that ensures this will happen. The table in Figure 6.9 identifies the principals and agents in the examples from this section.

Principal | Agent | Action that is hidden, and not covered in the contract
Employer | Employee | Quality and quantity of work
Banker | Borrower | Repayment of loan, prudent conduct
Owner | Manager | Maximization of owners’ profits
Landlord | Tenant | Care of the apartment
Insurance company | Insured | Prudent behaviour
Parents | Teacher/doctor | Quality of teaching and care
Parents | Children | Care in old age

Figure 6.9 Hidden action problems.

Emile Durkheim (1858–1917), the founder of modern sociology, observed that ‘not everything in the contract is contractual.’ There is usually something that matters to at least one of the parties that cannot be written down in an enforceable contract. 
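The hidden-action problem running through these examples can be sketched with a few lines of code. The payoff numbers and the detection probability below are illustrative assumptions, not figures from the text; the point is only that a self-interested agent works when the employment rent she risks losing exceeds the disutility of effort:

```python
# Illustrative sketch of the agent's work-or-shirk decision.
# All numbers are assumptions for the example.
def agent_works(wage, reservation_wage, disutility_of_effort,
                prob_caught_shirking):
    """True if working is at least as good for the agent as shirking."""
    rent = wage - reservation_wage            # employment rent per period
    # Shirking saves the disutility of effort but risks losing the rent.
    expected_loss_from_shirking = prob_caught_shirking * rent
    return expected_loss_from_shirking >= disutility_of_effort

# At the reservation wage the rent is zero, so shirking costs nothing:
print(agent_works(wage=10, reservation_wage=10,
                  disutility_of_effort=2, prob_caught_shirking=0.5))  # False

# A wage above the reservation wage creates a rent worth protecting:
print(agent_works(wage=16, reservation_wage=10,
                  disutility_of_effort=2, prob_caught_shirking=0.5))  # True
```

This is also why the gig economy platforms described earlier need pay no rent: the gig contract is near-complete, so non-performance is punished directly rather than through the threat of losing a rent.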
Exercise 6.9 Principal–agent relationships For each of the following examples, explain who is the principal, who is the agent, and what aspects of their interaction are of interest to each and are not covered by a complete contract. 1. A company hires a security guard to protect its premises at night. 2. A charity wants to commission research to find out as much as possible about a new virus. Question 6.13 Choose the correct answer(s) Which of the following statements correctly identify who is the principal and who is the agent? • In a public limited company, the managers are the principals and the shareholders are the agents. • In a contract between a football club and its star player, the club is the principal and the player is the agent. • In an Airbnb contract, the owner and the traveller are both principals and agents. • In a contract to buy an essay from an online provider, the essay writer is the principal and the student is the agent. • The shareholders do not have the information to verify whether the managers are working in their interest. Therefore, the shareholders are the principals and the managers are the agents. • The club is not guaranteed to receive the returns worthy of the hundreds of thousands of pounds a week paid to its star player. • The quality of the room rented is not guaranteed, and in this case the traveller is the principal and the owner is the agent. But the care of the room is also not guaranteed, in which case the owner is the principal and the traveller is the agent. • The student does not know whether the essay bought will bring him the top grade. Therefore, the student is the principal and the essay writer is the agent. Question 6.14 Choose the correct answer(s) Which of the following are measures that can reduce principal–agent problems? 
• paying part of the chief executive’s bonus as company shares rather than cash • a black box in the car that measures the speed of the driver • travellers who submit an insurance claim must pay the first £100 of the amount claimed out of pocket (the insurer will not reimburse this £100) • increasing the number of workers in a factory • This is a measure to realign the interests of the chief executive with those of the shareholders. • This is a measure to reduce the moral hazard of drivers, who may drive more recklessly if they have car insurance. • This is a measure to reduce the moral hazard of travellers who have travel insurance from being more careless with their luggage. • This makes monitoring more difficult and worsens the principal–agent problem. 6.14 Conclusion In a capitalist economy, the division of labour is coordinated both by markets (which contribute to the decentralization of power) and by firms (which concentrate power in the hands of owners). To understand the role of the firm, we view it not only as an actor, but also as a stage on which owners, managers, and employees interact in principal–agent relationships. While the separation of ownership and control makes it possible for the objectives of owners and managers to diverge, incentive schemes may help align their interests. The problem of hidden actions becomes especially evident in the worker–employer relationship, which is characterized by incomplete contracts. These arise because employees’ effort is neither perfectly observable nor verifiable, and their tasks depend on unforeseeable future events. It is this contractual incompleteness that causes the wages offered to workers to be above their reservation wage, giving rise to an employment rent that incentivizes them to exert effort. 
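The employer's side of the labour discipline model can be illustrated numerically. The best-response function below is an arbitrary concave example chosen for illustration (it is not the function used in the unit): effort is zero at or below the reservation wage and rises with the wage at a diminishing rate. The employer then chooses the wage that minimizes the cost of an efficiency unit, w / e(w):

```python
# Illustrative efficiency-wage calculation; e(w) is an assumed example.
RESERVATION_WAGE = 6.0

def effort(wage):
    """Worker's best response: concave in the wage above the reservation wage."""
    if wage <= RESERVATION_WAGE:
        return 0.0
    return (wage - RESERVATION_WAGE) ** 0.5

# The employer minimizes the wage cost per efficiency unit, w / e(w),
# here by searching a fine grid of candidate wages above the reservation wage.
candidates = [RESERVATION_WAGE + 0.01 * k for k in range(1, 2000)]
efficiency_wage = min(candidates, key=lambda w: w / effort(w))

print(round(efficiency_wage, 2))           # the efficiency wage
print(efficiency_wage - RESERVATION_WAGE)  # a strictly positive employment rent
```

For this particular e(w), the first-order condition 1/w = e′(w)/e(w) gives w = 12, so the grid search lands at a wage double the reservation wage: the worker earns an employment rent of 6, exactly the kind of rent that disciplines effort in the model.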
Drawing on our game theory and constrained choice tools, we have developed the labour discipline model to study the worker–employer interaction as a sequential, repeated game where the employer offers a wage and the worker responds by choosing the amount of effort she exerts. The worker’s best response function shows the optimal level of effort the worker chooses to exert at a given wage. It slopes upward because a higher wage increases the cost of job loss, inducing the worker to put in more effort. It is concave because the disutility of effort is greater at higher effort levels, causing diminishing marginal returns to wages. Its slope is the marginal rate of transformation (MRT) of wages into effort. Isocost lines can be seen as indifference curves. They join the set of points with the same ratio of effort to wages, which the employer seeks to maximize in order to minimize the wage cost per efficiency unit. A steeper isocost line represents a lower cost of effort. The slope of an isocost line represents the rate at which the employer is willing to increase wages to get higher effort (the MRS). The optimal choice is where MRT = MRS (Figure 6.10: The labour discipline model). The point of tangency represents a Nash equilibrium, and wages set this way are known as efficiency wages. The main implication for the broader economy is that there is always involuntary unemployment in equilibrium. We can analyse the effects of public policies by considering how these affect the worker’s best response function. We have also looked at worker-owned cooperatives, where typically less supervision is necessary, and considered the implications of the modern gig economy, including for people currently working as employees in firms. division of labour The specialization of producers to carry out different tasks in the production process. Also known as: specialization. 
principal–agent relationship This is an asymmetrical relationship in which one party (the principal) benefits from some action or attribute of the other party (the agent) about which the principal’s information is not sufficient to enforce in a complete contract. See also: incomplete contract. Also known as: principal–agent problem. separation of ownership and control The attribute of some firms by which managers are a separate group from the owners. hidden attributes (problem of) This occurs when some attribute of the person engaging in an exchange (or the product or service being provided) is not known to the other parties. Example: an individual purchasing health insurance knows her own health status, but the insurance company does not. Also known as: adverse selection. See also: hidden actions (problem of). incomplete contract A contract that does not specify, in an enforceable way, every aspect of the exchange that affects the interests of parties to the exchange (or of others). reservation wage What an employee would get in alternative employment, or from an unemployment benefit or other support, were he or she not employed in his or her current job. employment rent The economic rent a worker receives when the net value of her job exceeds the net value of her next best alternative (that is, being unemployed). Also known as: cost of job loss. labour discipline model A model that explains how employers set wages so that employees receive an economic rent (called employment rent), which provides workers an incentive to work hard in order to avoid job termination. See also: employment rent, efficiency wages. worker’s best response function (to wage) The optimal amount of work that a worker chooses to perform for each wage that the employer may offer. 
disutility of effort The degree to which doing some task (effort) is unpleasant. marginal rate of transformation (MRT) The quantity of some good that must be sacrificed to acquire one additional unit of another good. At any point, it is the slope of the feasible frontier. See also: marginal rate of substitution. isocost line A line that represents all combinations that cost a given total amount. efficiency unit A unit of effort is sometimes called an efficiency unit. marginal rate of substitution (MRS) The trade-off that a person is willing to make between two goods. At any point, this is the slope of the indifference curve. See also: marginal rate of transformation. efficiency wages The payment an employer makes that is higher than an employee’s reservation wage, so as to motivate the employee to provide more effort on the job than he or she would otherwise choose to make. See also: labour discipline model, employment rent. involuntary unemployment A person who is seeking work, and willing to accept a job at the going wage for people of their level of skill and experience, but unable to secure employment is involuntarily unemployed. worker-owned cooperative A form of business in which a substantial fraction of the capital goods are owned by employees rather than by those who are not involved in production in the firm; worker-owners typically elect a manager to make day-to-day decisions. gig economy An economy made up of people performing services matched by means of a computer platform with those paying for the service. Workers are paid for task performance and not per hour, are not legally recognized as employees of the company that owns the platform, and typically receive few if any benefits from the owners other than matching. 
6.15 Doing Economics: Measuring management practices In Section 6.1, we discussed the top-down decision-making structure in most firms, which involves managers directing the activities of their employees to implement the long-term strategies decided by owners. We might expect that firms where employees and production processes are managed well will be more productive than poorly-managed firms. However, defining and quantifying ‘good’ management practices is a challenge. In Doing Economics Empirical Project 6, we look at some ways to measure the quality of a firm’s management practices, and make comparisons across countries, industries, and types of firms, and discuss possible explanations for the patterns we observe. Go to Doing Economics Empirical Project 6 to work on this project. Learning objectives In this project you will: • explain how survey data is collected, and describe measures that can increase the reliability and validity of survey data • use column charts and box and whisker plots to compare distributions • calculate conditional means for one or more conditions, and compare them on a bar chart • construct confidence intervals and use them to assess statistical significance • evaluate the usefulness and limitations of survey data for determining causality. 6.16 References • Bewley, Truman F. 1999. Why Wages Don’t Fall during a Recession. Cambridge, MA: Harvard University Press. • Braverman, Harry, and Paul M. Sweezy. 1975. Labor and Monopoly Capital: The Degradation of Work in the Twentieth Century. 2nd ed. New York, NY: Monthly Review Press. • Coase, Ronald H. 1937. ‘The Nature of the Firm’. Economica 4 (16): pp. 386–405. • Coase, Ronald H. 1992. ‘The Institutional Structure of Production’. American Economic Review 82 (4): pp. 713–19. • Couch, Kenneth A., and Dana W. Placzek. 2010. ‘Earnings Losses of Displaced Workers Revisited’. American Economic Review 100 (1): pp. 572–89. • Ehrenreich, Barbara. 2011. 
Nickel and Dimed: On (Not) Getting By in America. New York, NY: St. Martin’s Press. • Hansmann, Henry. 2000. The Ownership of Enterprise. Cambridge, MA: Belknap Press. • Helper, Susan, Morris Kleiner, and Yingchun Wang. 2010. ‘Analyzing Compensation Methods in Manufacturing: Piece Rates, Time Rates, or Gain-Sharing?’. NBER Working Papers No. 16540, National Bureau of Economic Research, Inc. • Jacobson, Louis, Robert J. Lalonde, and Daniel G. Sullivan. 1993. ‘Earnings Losses of Displaced Workers’. The American Economic Review 83 (4): pp. 685–709. • Kletzer, Lori G. 1998. ‘Job Displacement’. Journal of Economic Perspectives 12 (1): pp. 115–36. • Kroszner, Randall S. and Louis Putterman (editors). 2009. The Economic Nature of the Firm: A Reader. Cambridge: Cambridge University Press. • Lazear, Edward P., Kathryn L. Shaw, and Christopher Stanton. 2016. ‘Making Do with Less: Working Harder during Recessions’. Journal of Labor Economics 34 (S1 Part 2): pp. 333-360. • Marx, Karl. (1848) 2010. The Communist Manifesto. Edited by Friedrich Engels. London: Arcturus Publishing. • Marx, Karl. 1906. Capital: A Critique of Political Economy. New York, NY: Random House. • Micklethwait, John and Adrian Wooldridge. 2003. The Company: A Short History of a Revolutionary Idea. New York, NY: Modern Library. • Mill, John Stuart. (1848) 1994. Principles of Political Economy. New York: Oxford University Press. • Mill, John Stuart. (1859) 2002. On Liberty. Mineola, NY: Dover Publications. • O’Reilly, Tim, and Eric S. Raymond. 2001. The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary. Sebastopol, CA: O’Reilly. • Pencavel, John. 2002. Worker Participation: Lessons from the Worker Co-ops of the Pacific Northwest. New York, NY: Russell Sage Foundation Publications. • Simon, Herbert A. 1951. ‘A Formal Theory of the Employment Relationship’. Econometrica 19 (3). • Simon, Herbert A. 1991. ‘Organizations and Markets’. 
Journal of Economic Perspectives 5 (2): pp. 25–44. • Toynbee, Polly. 2003. Hard Work: Life in Low-pay Britain. London: Bloomsbury Publishing. • Williamson, Oliver E. 1985. The Economic Institutions of Capitalism. New York, NY: Collier Macmillan. 1. Herbert A. Simon. 1991. ‘Organizations and Markets’. Journal of Economic Perspectives 5 (2): pp. 25–44. 2. Herbert A. Simon. 1951. ‘A Formal Theory of the Employment Relationship’. Econometrica 19 (3). 3. Karl Marx. (1848) 2010. The Communist Manifesto. Edited by Friedrich Engels. London: Arcturus Publishing. 4. Ronald H. Coase. 1937. ‘The Nature of the Firm’. Economica 4 (16): pp. 386–405. 5. Ronald H. Coase. 1937. ‘The Nature of the Firm’. Economica 4 (16): pp. 386–405. 6. D. Robertson. 1923. The Control of Industry. Hitchen: Nisbet. 7. Ronald H. Coase. 1992. ‘The Institutional Structure of Production’. American Economic Review 82 (4): pp. 713–719. 8. Barbara Ehrenreich. 2011. Nickel and Dimed: On (Not) Getting By in America. New York, NY: St. Martin’s Press. 9. Polly Toynbee. 2003. Hard Work: Life in Low-pay Britain. London: Bloomsbury Publishing. 10. Harry Braverman and Paul M. Sweezy. 1975. Labor and Monopoly Capital: The Degradation of Work in the Twentieth Century. 2nd ed. New York, NY: Monthly Review Press. 11. Susan Helper, Morris Kleiner, and Yingchun Wang. 2010. ‘Analyzing Compensation Methods in Manufacturing: Piece Rates, Time Rates, or Gain-Sharing?’. NBER Working Papers No. 16540, National Bureau of Economic Research, Inc. 12. Lori G. Kletzer. 1998. ‘Job Displacement’. Journal of Economic Perspectives 12 (1): pp. 115–136. 13. Kenneth A. Couch and Dana W. Placzek. 2010. ‘Earnings Losses of Displaced Workers Revisited’. American Economic Review 100 (1): pp. 572–589. 14. Louis Jacobson, Robert J. Lalonde, and Daniel G. Sullivan. 1993. ‘Earnings Losses of Displaced Workers’. The American Economic Review 83 (4): pp. 685–709. 15. Edward P. Lazear, Kathryn L. Shaw, and Christopher Stanton. 2016. 
‘Making Do with Less: Working Harder during Recessions’. Journal of Labor Economics 34 (S1 Part 2): pp. 333–360. 16. Truman F. Bewley. 1999. Why Wages Don’t Fall during a Recession. Cambridge, MA: Harvard University Press. 17. John Stuart Mill. (1859) 2002. On Liberty. Mineola, NY: Dover Publications. 18. John Stuart Mill. (1848) 1994. Principles of Political Economy. New York: Oxford University Press.
Issue No. 01 - January (2012, vol. 24), pp. 15-29

Jeremy H. Wright, AT&T Labs - Research, Florham Park
John Grothendieck, Raytheon BBN Technologies

ABSTRACT
Text streams are ubiquitous and contain a wealth of information, but are typically orders of magnitude too large in scale for comprehensive human inspection. There is a need for tools that can detect and group changes occurring within text streams and substreams, in order to find, structure, and summarize these changes for presentation to human analysts. This paper describes a procedure for efficiently finding step changes, trends, bursts, and cyclic changes affecting the frequencies of words, or more general lexical items, within streams of documents which may optionally be labeled with metadata. The common phenomenon of over-dispersion is accommodated using mixture distributions. A streaming implementation is described which can process data from a continuous feed. Anomalies can be detected, grouped, and rendered visually for human comprehension.

INDEX TERMS
Statistical software; modeling structured, textual and multimedia data; text mining.

CITATION
Jeremy H. Wright, John Grothendieck, "CoCITe—Coordinating Changes in Text", IEEE Transactions on Knowledge & Data Engineering, vol. 24, no. 1, pp. 15-29, January 2012, doi:10.1109/TKDE.2010.250
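The over-dispersion the abstract refers to is the situation where word counts vary more than a Poisson model allows. A quick way to see it is the variance-to-mean ratio (dispersion index) of a word's per-period counts; the sketch below is illustrative only (the data and the paper's actual mixture-distribution machinery are not from the article):

```python
def dispersion_index(counts):
    """Variance-to-mean ratio: ~1 for Poisson-like counts, >1 when over-dispersed."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n  # population variance
    return var / mean

# Illustrative daily counts of one word: a burst on day 3 inflates the variance.
bursty = [0, 0, 10, 0, 0]
steady = [2, 2, 2, 2, 2]

print(dispersion_index(bursty))  # -> 8.0, strongly over-dispersed
print(dispersion_index(steady))  # -> 0.0, no variability at all
```

A stream monitor would flag words whose index drifts well above 1 as candidates for burst or step-change analysis.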
###### Abstract

We investigate the impact of the 766.3 Ty KamLAND spectrum data on the determination of the solar neutrino oscillation parameters. We show that the observed spectrum distortion in the KamLAND experiment firmly establishes $\Delta m^2_{21}$ to lie in the low-LMA solution region. The high-LMA solution is excluded at more than 4$\sigma$ by the global solar neutrino and KamLAND spectrum data. Maximal solar neutrino mixing is ruled out at more than the 6$\sigma$ level. The allowed region in the $\Delta m^2_{21}$-$\sin^2\theta_{12}$ plane is found to be remarkably stable with respect to leaving out the data from one of the solar neutrino experiments from the global analysis. We perform a three-flavor neutrino oscillation analysis of the global solar neutrino and KamLAND spectrum data as well, and derive an upper limit on $\sin^2\theta_{13}$. We derive predictions for the CC to NC event rate ratio and the day-night (D-N) asymmetry in the CC event rate, measured in the SNO experiment, and for the suppression of the event rate in the BOREXINO and LowNu experiments. Prospective high precision measurements of the solar neutrino oscillation parameters are also discussed.

SISSA 47/2004/EP hep-ph/0406328

Update of the Solar Neutrino Oscillation Analysis with the 766 Ty KamLAND Spectrum

Abhijit Bandyopadhyay, Sandhya Choubey, Srubabati Goswami, S.T. Petcov, D.P. Roy

Saha Institute of Nuclear Physics, 1/AF, Bidhannagar, Calcutta 700 064, India; INFN, Sezione di Trieste, Trieste, Italy; Scuola Internazionale Superiore di Studi Avanzati, I-34014, Trieste, Italy; Harish-Chandra Research Institute, Chhatnag Road, Jhusi, Allahabad 211 019, India; Institute of Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, Sofia, Bulgaria; The Abdus Salam International Centre for Theoretical Physics, I-34100, Trieste, Italy; Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India

## 1 Introduction

The last four years will most likely be described in the future as the golden years of solar neutrino physics.
The pioneering results of the Homestake experiment [1], which first observed neutrinos emitted by the Sun and discovered the existence of a solar neutrino deficit (footnote 1: Let us recall that the Cl-Ar method of neutrino detection, used in the Homestake experiment, was first proposed in [2].), and of the Kamiokande [3], SAGE and GALLEX/GNO [4] experiments, which confirmed and extended the Homestake results on the solar neutrino deficit, were reinforced during the last four years by a series of precision measurements by the Super-Kamiokande (SK) [5], SNO [6, 7, 8] and KamLAND [9] experiments. With the recent publication of the KamLAND 766 Ty spectrum data [10] and under the plausible assumption of CPT symmetry, for the first time a unique solution of the solar neutrino problem in terms of neutrino oscillations [11, 12] can be unambiguously identified. Let us summarize the main steps of the progress in our understanding of the solution of the solar neutrino problem made in these past four years.

• The first charged current (CC) data from SNO [6], together with the absence of distortions of the spectrum of the final-state electrons in the elastic scattering reaction due to solar neutrinos, measured with high precision in the SK experiment [5], excluded the vacuum oscillation (VO) and the small mixing angle (SMA) MSW [12] solutions in favour of the large mixing angle (LMA) MSW and LOW solutions [13, 14].

• The first neutral current (NC) solar neutrino data from SNO [7], obtained by observing the solar neutrino induced break-up of deuterium (Phase I of the experiment), provided a direct estimate of the Boron neutrino flux normalisation $f_B$. It confirmed the Standard Solar Model (SSM) prediction for this quantity (the uncertainty in the experimentally determined $f_B$ was smaller than that in the SSM prediction). This implied that the CC rates of Cl, SK and SNO are indeed smaller than 0.5. This strongly favoured the LMA MSW solution over the LOW solution and a non-maximal solar neutrino mixing angle [15].
• The convincing evidence in favour of the LMA solution was obtained in the KamLAND experiment with reactor $\bar\nu_e$'s [9], which published its first results, based on a statistics of 162 Ty, in December of 2002. Under the plausible assumption of CPT invariance, the suppression of the reactor $\bar\nu_e$ flux observed in the KamLAND experiment firmly established the LMA solution, ruling out the LOW solution at about the 5$\sigma$ level. Moreover, the 162 Ty KamLAND data on the spectrum distortion, combined with the global solar neutrino data, implied that the LMA solution was confined to two sub-regions: low-LMA (or LMA-I), centered at a lower value of $\Delta m^2_{21}$, and high-LMA (or LMA-II), centered at a higher value of $\Delta m^2_{21}$, with similar values of $\sin^2\theta_{12}$ in the two cases. The best fit was in the low-LMA region, while the high-LMA region was allowed only at 99% C.L. [16, 17]

• Finally, the NC data from the salt phase of the SNO experiment [8] provided a more precise measurement of $f_B$. The inclusion of these data in the global solar neutrino oscillation analysis reduced further the allowed region of solar neutrino oscillation parameters. Now the high-LMA region was allowed only at the 2.65$\sigma$ level, while maximal solar neutrino mixing was excluded at about 5$\sigma$ [18, 19].

One of the most important issues after these developments was the definitive resolution of the high-LMA and low-LMA solution ambiguity. This was expected to lead to a precise determination of the neutrino mass squared difference driving the solar neutrino oscillations. For this reason a study of KamLAND 410 Ty and 1000 Ty simulated spectrum data, corresponding to different points in the parameter space spanning the low-LMA and high-LMA solution regions, was made in [18]. This study showed, in particular, that if the true KamLAND 1000 Ty spectrum data corresponded to a point in the low-LMA region, the high-LMA solution would be ruled out at the 3$\sigma$ level by the combined solar and KamLAND data, while the low-LMA solution region would be considerably reduced.
The best-fit point obtained from the analysis of the energy spectrum of the recently released 766.3 Ty data from KamLAND indeed lies inside the low-LMA region [10]. And the combined global solar and KamLAND data indeed exclude the high-LMA solution at the 3$\sigma$ level, in agreement with the above expectation. In this article we investigate the impact of the 766.3 Ty KamLAND spectrum data on the determination of the solar neutrino oscillation parameters. We perform first a two-neutrino oscillation analysis of the global solar neutrino and the latest KamLAND spectrum data. This permits us to quantify the improvements in the precision of determination of the solar neutrino oscillation parameters which the new KamLAND data imply and, in particular, to assess the status of the high-LMA solution. We check the stability of the allowed region of values of the solar neutrino oscillation parameters thus derived with respect to leaving out from the analysis the data from one of the solar neutrino experiments. This serves also as a check of the consistency between the data from the different experiments and gives some idea about the level of redundancy of the global solar neutrino data set. We next extend the analysis to the case of three-neutrino oscillations. We derive, in particular, a new upper limit on the CHOOZ mixing angle $\theta_{13}$, and study the dependence of the allowed values of the parameters $\Delta m^2_{21}$ and $\sin^2\theta_{12}$, which drive the solar neutrino oscillations, on the value of $\sin^2\theta_{13}$. We give predictions for the CC to NC event rate ratio and the day-night (D-N) asymmetry in the CC event rate, measured in the SNO experiment, and for the suppression of the event rate in the BOREXINO and LowNu experiments, designed to measure the $^7$Be and $pp$ solar neutrino fluxes. Finally, we discuss also how the precision of the $\Delta m^2_{21}$ and $\sin^2\theta_{12}$ determination can improve with the increase of the precision of the future SNO data, as well as by performing a reactor $\bar\nu_e$ oscillation experiment with a baseline of about 70 km.
## 2 Two Flavour-Neutrino Oscillation Analysis

We first present the results of a standard two-flavor neutrino oscillation analysis. We use the 13-bin KamLAND spectrum data and define a $\chi^2$ assuming a Poisson distribution as

$$\chi^2_{klspec}=\sum_i\left[2\left(X_n S^{theory}_{KL,i}-S^{expt}_{KL,i}\right)+2\,S^{expt}_{KL,i}\ln\!\left(\frac{S^{expt}_{KL,i}}{X_n S^{theory}_{KL,i}}\right)\right]+\frac{(X_n-1)^2}{\sigma^2_{sys}},\qquad(1)$$

where the normalisation $X_n$ is allowed to vary freely and $\sigma_{sys}$ is the systematic uncertainty. (Footnote 2: Note that several theoretical and experimental systematic errors, like those due to the energy scale, reactor spectrum etc., are energy dependent. However, these details being inaccessible to us, we have used the same total systematic error for all the bins. Detailed information by the KamLAND collaboration on the errors and their correlations in each bin would make our analysis more accurate. As the KamLAND statistics increases and systematics start to dominate, more detailed information from the KamLAND collaboration will be an important requirement.) We include the revised resolution width, fuel composition, detector fiducial mass and efficiencies from [10]. The other details of our analysis can be found in [16, 20]. Some of the reactors, particularly the Kashiwazaki-Kariwa and Fukushima I and II reactor complexes, were partially or totally shut down during some of the period of data taking in KamLAND. We have approximately taken into account this change in the reactor flux due to the reactor shut-downs using the plots showing the time variations of the number of fissions in a given reactor, and hence the expected reactor flux in KamLAND [21]. We have also used the information on the reactor operation schedules available on the web [22]. In the latest version of [10], the KamLAND collaboration have identified a new source of background in their analysis, coming from the $^{13}$C$(\alpha,n)^{16}$O reaction induced by the decay of the radon daughter $^{210}$Po in the liquid scintillator. This increases the total background in their signal region above 2.6 MeV. We include in our analysis this new background and its associated uncertainty.
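The Poissonian $\chi^2$ of eq. (1) is straightforward to evaluate numerically. A minimal sketch follows; the 6.5% systematic error and the bin contents are illustrative placeholders, not the paper's actual inputs:

```python
import math

def chi2_klspec(s_theory, s_expt, x_n, sigma_sys):
    """Poissonian chi^2 of eq. (1): a deviance term per spectral bin plus a
    pull term for the free normalisation X_n with systematic error sigma_sys."""
    total = 0.0
    for th, ex in zip(s_theory, s_expt):
        total += 2.0 * (x_n * th - ex)
        if ex > 0:  # the logarithmic term vanishes for empty bins
            total += 2.0 * ex * math.log(ex / (x_n * th))
    return total + (x_n - 1.0) ** 2 / sigma_sys ** 2

# A perfect fit (theory == data, X_n = 1) gives chi^2 = 0;
# any mismatch makes the deviance positive.
print(chi2_klspec([10, 20, 30], [10, 20, 30], 1.0, 0.065))  # -> 0.0
```

In a real fit one would minimise over `x_n` at each point of the oscillation parameter grid.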
We show by the unshaded contours in Fig. 1 the allowed regions obtained using the 766.3 Ty KamLAND spectrum data. The best-fit point according to our analysis (footnote 3: With the KamLAND results in the earlier versions of [10], which did not include the new background, the best-fit values of the oscillation parameters were somewhat different. Thus, the inclusion of the new background in our analysis is seen to have changed the best-fit oscillation parameters as well as improved the goodness of fit (g.o.f.). This is in agreement with what the KamLAND collaboration has obtained.) is at

$$\Delta m^2_{21}=8.2\times10^{-5}\ {\rm eV^2},\quad \sin^2\theta_{12}=0.26,\quad \chi^2_{min}/{\rm d.o.f.}=15.24/10.\qquad(2)$$

The best-fit value of $\Delta m^2_{21}$ we find agrees reasonably well with that obtained by the KamLAND collaboration [10], while our best-fit value of $\sin^2\theta_{12}$ is somewhat lower than that found in [10], because of differences in the fitting procedure and the relative insensitivity of the KamLAND data to this parameter. The regions at higher values of $\Delta m^2_{21}$ which were allowed by the KamLAND 162 Ty spectrum data [9] are now severely disfavored due to the increased precision on the observed spectral distortion, and only a very tiny area is allowed at the 3$\sigma$ level. Superposed on the same figure, we show by the shaded areas the allowed regions obtained using the combined solar neutrino + 162 Ty KamLAND data. As follows from this figure, the best-fit point of the new KamLAND spectrum data lies inside the allowed low-LMA region obtained in the solar neutrino + 162 Ty KamLAND spectrum data analysis. In Fig. 2 we show the allowed region obtained from the combined analysis of the global solar neutrino data and the 766.3 Ty KamLAND spectrum data. The dashed line in the figure indicates the region allowed at 90% C.L. by the global solar neutrino data alone. The star (dot) marks the best-fit point of the solar neutrino + 766.3 Ty KamLAND (solar neutrino) data.
We have used in this analysis the solar neutrino data on the total event rates from the radiochemical experiments Cl [1] and Ga (Gallex, SAGE and GNO combined) [4], the 1496-day 44-bin Zenith angle spectrum data from SK [5], the combined CC, NC and Electron Scattering (ES) 34-bin energy spectrum data from phase I (the pure $\rm D_2O$ phase) of SNO [7], and the data on the CC, NC and ES total observed rates from phase II (the salt phase) of the SNO experiment [8]. For the combined analysis of solar and KamLAND data we define the global $\chi^2$ as

$$\chi^2_{global}=\chi^2_\odot+\chi^2_{klspec},\qquad(3)$$

where

$$\chi^2_\odot=\sum_{i,j=1}^{N}\left(R^{expt}_i-R^{theory}_i\right)\left(\sigma^2_{ij}\right)^{-1}\left(R^{expt}_j-R^{theory}_j\right),\qquad(4)$$

the $R_i$ being the solar data points, $N$ the number of data points and $(\sigma^2_{ij})^{-1}$ the inverse of the covariance matrix, containing the squares of the correlated and uncorrelated experimental and theoretical errors. The $^8$B flux normalisation factor $f_B$ is left to vary freely in the analysis. For the other solar neutrino fluxes, the predictions and estimated uncertainties of the most recent standard solar model (SSM) [23] (BP04) have been utilized. For further details of our solar neutrino data analysis we refer the reader to our earlier papers [13, 15, 18]. We find that with the inclusion of the latest KamLAND spectrum data, the high-LMA region is disfavored at more than 99.9% C.L. in a two-parameter fit. Thus, the high-LMA solution is excluded at more than 3$\sigma$. This establishes the low-LMA solution as the unique solution of the solar neutrino problem. It also confirms our prediction [18] that, if the best fit of the KamLAND spectrum data corresponds to a point in the low-LMA solution region, there will be no high-LMA region allowed at the 3$\sigma$ level. The best-fit point we get from the combined solar neutrino and KamLAND data analysis is

$$\Delta m^2_{21}=8.0\times10^{-5}\ {\rm eV^2},\quad \sin^2\theta_{12}=0.28,\quad f_B=0.88,\quad \chi^2_{min}/{\rm d.o.f.}=85.42/92,\qquad(5)$$

in good agreement with that obtained in [10].
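The covariance form of the solar $\chi^2$ in eq. (4) is a standard quadratic form in the rate residuals. A minimal sketch (the toy rates and covariance below are illustrative, not the actual solar data set):

```python
import numpy as np

def chi2_solar(r_expt, r_theory, cov):
    """Covariance chi^2 of eq. (4): r^T C^{-1} r for the vector of
    rate residuals r_i = R_expt_i - R_theory_i."""
    r = np.asarray(r_expt, dtype=float) - np.asarray(r_theory, dtype=float)
    return float(r @ np.linalg.inv(cov) @ r)

# With an uncorrelated (diagonal) covariance this reduces to the familiar
# sum of (residual / sigma)^2 over the data points.
cov = np.diag([0.1 ** 2, 0.2 ** 2])
print(chi2_solar([1.0, 2.0], [0.9, 1.8], cov))  # (0.1/0.1)^2 + (0.2/0.2)^2 = 2.0
```

Off-diagonal entries of `cov` encode the correlated theoretical errors shared between experiments.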
Note that the best-fit point from the global solar neutrino data analysis,

$$\Delta m^2_{21}=6.07\times10^{-5}\ {\rm eV^2},\quad \sin^2\theta_{12}=0.29,\quad f_B=0.90,\quad \chi^2_{min}/{\rm d.o.f.}=69.06/80,\qquad(6)$$

lies outside the 3$\sigma$ range allowed after including the new KamLAND data. However, the $\chi^2$ function for the solar neutrino data changes weakly when $\Delta m^2_{21}$ varies in an interval of values which is centered on the best-fit value in eq. (6) and includes the best-fit value of the global data set, eq. (5) (see Fig. 3). The best-fit value of $\Delta m^2_{21}$ in the global fit is controlled by the KamLAND data, whereas the best-fit value of $\sin^2\theta_{12}$ is controlled by the global solar neutrino data. The allowed region is seen to have narrowed down considerably, making it possible to plot it on a linear scale. In Table 1 we present the 3$\sigma$ allowed ranges of $\Delta m^2_{21}$ and $\sin^2\theta_{12}$ obtained using different data sets. We also show the uncertainty in the value of the parameters through a quantity "spread", which we define as

$$\text{spread}=\frac{prm_{max}-prm_{min}}{prm_{max}+prm_{min}}\times100,\qquad(7)$$

where $prm$ denotes the parameter $\Delta m^2_{21}$ or $\sin^2\theta_{12}$, and $prm_{max}$ and $prm_{min}$ are the maximal and minimal values of the chosen parameter allowed at a given C.L. Table 1 illustrates the remarkable sensitivity of the KamLAND experiment to $\Delta m^2_{21}$, which results in stringent constraints on the allowed values of this parameter. However, the KamLAND experiment does not constrain the allowed range of $\sin^2\theta_{12}$ much better than the solar neutrino experiments. So far we have given the allowed regions of $\Delta m^2_{21}$ and $\sin^2\theta_{12}$ obtained from two-parameter fits of the data. It is instructive to see the bounds on the oscillation parameters using one-parameter plots of $\Delta\chi^2$ vs. $\Delta m^2_{21}$ and $\Delta\chi^2$ vs. $\sin^2\theta_{12}$. In Fig. 3 we show the dependence of $\Delta\chi^2$ on $\Delta m^2_{21}$ (left-hand panel) and on $\sin^2\theta_{12}$ (right-hand panel) after marginalising over the remaining free parameters. In this analysis only the solar neutrino data, and the solar neutrino + 766.3 Ty KamLAND spectrum data, have been used. We find that the allowed range of $\Delta m^2_{21}$ values becomes much narrower compared to that obtained using only the global solar neutrino data.
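The "spread" of eq. (7) is just the half-width of the allowed interval relative to its midpoint, in percent. A one-line implementation (the sample range below is an illustrative 3$\sigma$ interval, not a value from Table 1):

```python
def spread(prm_max, prm_min):
    """'Spread' of eq. (7): (max - min) / (max + min) x 100, in percent."""
    return (prm_max - prm_min) / (prm_max + prm_min) * 100.0

# An illustrative 3-sigma range of (7.2 - 9.2) x 10^-5 eV^2 corresponds to
# a spread of 2.0 / 16.4 x 100, i.e. about 12%.
print(round(spread(9.2, 7.2), 1))  # -> 12.2
```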
The inclusion of the recent KamLAND results makes the $\Delta\chi^2$ for the high-LMA region even larger, excluding it at more than 4$\sigma$ for a one-parameter fit. From this figure we also see that the best-fit value of $\Delta m^2_{21}$ obtained from the global solar neutrino data alone now has a sizable $\Delta\chi^2$, and hence is disfavored. The inclusion of the new KamLAND spectrum data disfavors maximal solar neutrino mixing to a greater degree: the $\Delta\chi^2$ value at $\sin^2\theta_{12}=0.5$ is a little above 40, thereby excluding maximal mixing at more than 6$\sigma$ for a one-parameter fit. Figure 3, showing the dependence of $\Delta\chi^2$ on $\sin^2\theta_{12}$, corroborates our results presented in Table 1, namely, that the allowed range of $\sin^2\theta_{12}$ does not change considerably with the inclusion of the new KamLAND results. The reason for this can be traced to the fact that for the values of $\Delta m^2_{21}$ and $\sin^2\theta_{12}$ allowed by the combined solar neutrino and KamLAND data, the Earth matter effects are negligible at the baselines relevant for the KamLAND experiment, and the relevant $\bar\nu_e$ survival probability reads:

$$P^{KL}_{ee}\approx1-\sin^2 2\theta_{12}\,\sin^2\!\left(\frac{\pi L}{\lambda}\right),\qquad(8)$$

where $\lambda$ denotes the oscillation length,

$$\lambda=2.47\ \frac{\rm eV^2}{\Delta m^2_{21}}\ \frac{E}{\rm MeV}\ {\rm m}.\qquad(9)$$

On the other hand, if we neglect the Earth matter effects in the solar neutrino transitions, which are rather small, the survival probability relevant for the interpretation of the data of the SNO and SK solar neutrino experiments is given by the adiabatic MSW prediction [12]

$$P^{sno}_{ee}\approx\sin^2\theta_{12}.\qquad(10)$$

Since $\sin^2 2\theta_{12}$ varies more slowly with $\theta_{12}$ than $\sin^2\theta_{12}$ in the vicinity of maximal mixing, the survival probability relevant for the interpretation of the KamLAND data is less sensitive to $\theta_{12}$ than that measured at SNO. Moreover, the average $\bar\nu_e$ energy measured at KamLAND and the average source-detector distance for KamLAND ($\sim$180 km) correspond to a phase $\pi L/\lambda$ close to a multiple of $\pi$ for the best-fit $\Delta m^2_{21}$. At such a phase the survival probability has a maximum (SPMAX). This means that the coefficient of the $\sin^2 2\theta_{12}$ term in $P^{KL}_{ee}$ is relatively small, preventing the KamLAND experiment from reaching high precision in the determination of $\theta_{12}$.
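Equations (8) and (9) can be checked numerically. The sketch below evaluates the oscillation length and the vacuum survival probability; the 4 MeV energy and 180 km baseline are illustrative round numbers, not fit inputs:

```python
import math

def osc_length_m(delta_m2_ev2, energy_mev):
    """Oscillation length of eq. (9): lambda = 2.47 (E/MeV)/(dm^2/eV^2) metres."""
    return 2.47 * energy_mev / delta_m2_ev2

def p_ee_kamland(sin2_2theta, length_m, delta_m2_ev2, energy_mev):
    """Vacuum antineutrino survival probability of eq. (8)."""
    lam = osc_length_m(delta_m2_ev2, energy_mev)
    return 1.0 - sin2_2theta * math.sin(math.pi * length_m / lam) ** 2

# For the best-fit dm^2 = 8.0e-5 eV^2 and a 4 MeV antineutrino,
# one full oscillation length is about 123.5 km.
lam = osc_length_m(8.0e-5, 4.0)
print(lam / 1000.0)  # -> 123.5 (km)
```

Scanning `length_m` shows how the sensitivity of $P^{KL}_{ee}$ to the mixing angle peaks at half-integer multiples of the oscillation length (the SPMIN discussed below) and vanishes at integer multiples (SPMAX).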
Evidently, the sensitivity to $\theta_{12}$ can be improved by reducing the baseline length to $L\simeq\lambda/2$, corresponding to a minimum of the survival probability (SPMIN) [24]. We shall come back to this point later.

## 3 Consistency Check between Different Experiments

In this Section we check the consistency between the allowed regions obtained using data from different experiments. In Fig. 4 we compare the KamLAND spectrum data with the predictions for the spectrum obtained for the values of $\Delta m^2_{21}$ and $\sin^2\theta_{12}$ corresponding to the i) solar neutrino data best-fit point, ii) KamLAND spectrum data best-fit point, and iii) solar neutrino + KamLAND spectrum data best-fit point. We also show the unoscillated spectrum obtained by us. This agrees fairly well with that given by the KamLAND collaboration in [10], indicating that we have correctly implemented the reactor powers and operation schedules from the available sources. This figure clearly illustrates the sensitivity of the predicted spectrum to $\Delta m^2_{21}$, and the deviations of the observed spectrum from that predicted at the solar neutrino data best-fit point. In the absence of the KamLAND results, it was necessary to compare the "low" and "high" energy CC solar neutrino data to determine $\Delta m^2_{21}$, and to compare the CC and NC data to determine $\sin^2\theta_{12}$. With $\Delta m^2_{21}$ determined using the new KamLAND results, one can make a consistency check by dispensing with the data from any one of the solar neutrino experiments. In Fig. 5 we present the allowed regions obtained by taking out the data of one solar neutrino experiment from the global data set. As before, we let the flux normalisation $f_B$ vary freely in the analysis. The figure shows that the allowed region is robust and does not change considerably when the data from one experiment are left out of the analysis. Taking out the SNO data leads to smaller values of $f_B$, and correspondingly larger values of $\sin^2\theta_{12}$, being allowed. Using the 162 Ty KamLAND data and taking out the SNO results from the analysis made the maximal mixing solution allowed [25].
With the 766.3 Ty KamLAND data included in the analysis, maximal mixing is ruled out at 3$\sigma$ even leaving out the SNO data from the data set used in the analysis. This is a consequence of the increased precision of the KamLAND data, which disfavours the maximal mixing solution. In Table 2 we compare the observed event rates in the different solar neutrino experiments with those predicted for the best-fit values of $\Delta m^2_{21}$ and $\sin^2\theta_{12}$ obtained in the analyses of the global solar neutrino data and of the global solar neutrino + KamLAND spectrum data. Note that the SNO NC event rate relative to the SSM prediction of BP04 is now below 1, while its earlier central value, with respect to the SSM prediction of BP00, was slightly above 1. This drop simply reflects the increase in the central value of the predicted $^8$B flux in the latest SSM of BP04 [23]. There is a corresponding drop in the other experimental rates shown in Table 2. However, this renormalisation has no effect on our results, since we have not used the SSM prediction for the normalisation of the $^8$B flux. Instead we have left $f_B$ as a free parameter to be determined by the solar neutrino data. This parameter is primarily determined by the NC event rate measured in SNO. We see from Table 2 that all the measured event rates, except that observed in the Cl experiment, are in very good agreement with the predicted ones. There is very little difference between the predictions corresponding to the solar neutrino data and solar neutrino + KamLAND spectrum data best-fit points. This shows the insensitivity of the fit of the solar neutrino data to small variations in $\Delta m^2_{21}$, in contrast to the fit of the KamLAND spectrum data. Note that the observed Cl rate is 2$\sigma$ lower than the global best-fit prediction. This is a statistically small but well known deviation which cannot be explained by the LMA solution [26].
If such a deviation is confirmed by future intermediate-energy solar neutrino experiments like Borexino, it will call for some additional subdominant mechanism of solar neutrino transitions.

## 4 Three Flavour Neutrino Mixing Analysis

In this Section we present results obtained from the analysis of the global data on solar and reactor neutrinos within a three-flavor neutrino mixing framework. In this case the solar neutrino oscillations are driven by $\Delta m^2_{21}$, while the atmospheric neutrino oscillations are driven by $\Delta m^2_{31}$. The best-fit value of $\Delta m^2_{31}$, obtained in the latest two-neutrino mixing analysis of the Super-Kamiokande atmospheric neutrino data on the Zenith-angle distribution of the $\mu$-like events, is $2.1\times10^{-3}$ eV$^2$ [27]. Thus, the two-neutrino mixing analyses of the solar and atmospheric neutrino data indicate that $\Delta m^2_{21}\ll\Delta m^2_{31}$. Under this approximation the effect of the third heaviest neutrino in the relevant solar neutrino and reactor anti-neutrino survival probabilities is due mainly to the mixing angle $\theta_{13}$. The relevant $\nu_e$ and $\bar\nu_e$ survival probabilities in the three-neutrino mixing cases of interest are given by the following expression:

$$P^{3\nu}_{ee}\cong\cos^4\theta_{13}\,P^{2\nu}_{ee}+\sin^4\theta_{13},\qquad(11)$$

where $P^{2\nu}_{ee}$ is the $\nu_e$ or $\bar\nu_e$ survival probability in the case of two-neutrino mixing (see, e.g., [29]). For solar neutrinos, $P^{2\nu}_{ee}$ is the two-neutrino mixing survival probability [30] with the solar electron number density $N_e$ replaced by $N_e\cos^2\theta_{13}$. In the case of the KamLAND experiment, $P^{2\nu}_{ee}$ is given by eq. (8). Strong constraints on the value of $\theta_{13}$ have been obtained in the CHOOZ and Palo Verde reactor antineutrino experiments [28]. We include the CHOOZ results in our three-flavour neutrino mixing analysis (see also [31]). In the limit of negligible $\Delta m^2_{21}$-driven oscillations at the CHOOZ baseline, the probability relevant for the interpretation of the CHOOZ data is given by

$$P^{3\nu\,CHOOZ}_{ee}\cong1-\sin^2 2\theta_{13}\,\sin^2\!\left(\Delta m^2_{31}L/4E\right).\qquad(12)$$

We note that $P^{3\nu\,CHOOZ}_{ee}$ depends on $\Delta m^2_{31}$, while the probabilities relevant for the solar neutrino and KamLAND data depend on $\Delta m^2_{21}$.
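The three-flavour correction of eq. (11) is tiny for the small $\theta_{13}$ allowed by the data, which a direct evaluation makes concrete (the two-neutrino probability 0.3 below is an illustrative value):

```python
def p_ee_3nu(p_ee_2nu, sin2_theta13):
    """Three-flavour survival probability of eq. (11):
    P_3nu = cos^4(theta13) * P_2nu + sin^4(theta13)."""
    cos2_theta13 = 1.0 - sin2_theta13
    return cos2_theta13 ** 2 * p_ee_2nu + sin2_theta13 ** 2

# At the best-fit sin^2(theta13) = 0.004, the shift of an illustrative
# two-neutrino probability of 0.3 is below one percent.
print(p_ee_3nu(0.3, 0.004))  # -> 0.2976208
```

For $\sin^2\theta_{13}=0$, eq. (11) reduces exactly to the two-neutrino case, as it must.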
We allow $\Delta m^2_{31}$ to vary freely within its allowed range, obtained using the one-parameter $\Delta\chi^2$ vs. $\Delta m^2_{31}$ fit of the SK atmospheric neutrino Zenith angle data presented by the SK Collaboration at the Neutrino 2004 International Conference [27] (footnote 4: The allowed range of $\sin^2\theta_{13}$ depends crucially on the allowed range of the atmospheric mass squared difference [25].), and perform a combined three-neutrino oscillation analysis of the global solar neutrino and reactor anti-neutrino data, including both the KamLAND and CHOOZ results. The best-fit values of the parameters obtained from the three-flavor neutrino mixing analysis are:

$$\Delta m^2_{21}=8.0\times10^{-5}\ {\rm eV^2},\quad \sin^2\theta_{12}=0.28,\quad \sin^2\theta_{13}=0.004,\quad f_B=0.88,\quad \chi^2_{min}/{\rm d.o.f.}=91.68/105.\qquad(13)$$

In Fig. 6 we present the $\Delta\chi^2$ as a function of $\sin^2\theta_{13}$, for $\Delta m^2_{31}$ allowed to vary within its 3$\sigma$ allowed range [27] and the other parameters allowed to vary freely. The bound on $\sin^2\theta_{13}$ obtained from the CHOOZ data analysis can be directly read from the figure for a one-parameter fit; the bound derived from the combined analysis of the solar neutrino, CHOOZ and KamLAND data is more stringent. In Fig. 7 we show the allowed regions in the $\Delta m^2_{21}$-$\sin^2\theta_{12}$ plane for four fixed values of $\sin^2\theta_{13}$. We note that although the presence of a small non-zero $\sin^2\theta_{13}$ can improve the fit in the regions of the parameter space with higher values of $\Delta m^2_{21}$ [18], i.e., in the high-LMA zone, the new KamLAND data are able to exclude the high-LMA region at more than 3$\sigma$ even in the presence of a third generation in the mixing, indicating the robustness of the low-LMA solution.

## 5 Future Projections

The recent KamLAND data combined with the solar neutrino data unambiguously determine the low-LMA solution as the unique solution of the solar neutrino problem. They also enable us to determine $\Delta m^2_{21}$ with a relatively high precision: 10% at 90% C.L. The high-LMA solution is disfavored at more than 3$\sigma$. The uncertainty in the value of $\Delta m^2_{21}$ is expected to diminish further as KamLAND collects more data.
However, as we have stressed before, the KamLAND experiment does not appreciably reduce the error on the value of $\sin^2\theta_{12}$ [24]. In the near future, the SNO collaboration is expected to publish data on the CC day/night spectrum observed during the salt phase of the experiment. The recent KamLAND results allow us to make relatively precise predictions for the day-night asymmetry in the SNO experiment:

$$A_{DN}=2\,\frac{N-D}{N+D}.\qquad(14)$$

In the right-hand panel of Fig. 8 we show the lines of constant $A_{DN}$ in the $\Delta m^2_{21}$-$\sin^2\theta_{12}$ plane for the SNO experiment (see also, e.g., [32, 33]). The predicted $A_{DN}$ in SNO for the current best-fit values of the parameters, and the corresponding 3$\sigma$ range, are given by:

$$A_{DN}({\rm SNO})=0.034,\quad 3\sigma\ \text{range}:\ 0.027-0.043.\qquad(15)$$

The published SNO result on the D/N asymmetry has a considerably larger uncertainty; the error has to be reduced to about 0.011 in order to observe $A_{DN}=0.034$ at the 3$\sigma$ level. In the left-hand panel of the same figure we also plot the iso-CC/NC contours for SNO. The measured value of the CC to NC ratio from the salt phase of the SNO experiment is

$$R^{salt}_{CC/NC}=0.305\pm0.033.\qquad(16)$$

Phase III of SNO will collect neutral current data using $^3$He counters [34]. This will give totally uncorrelated information on the CC and NC event rates observed in SNO and a reduced error on the NC event rate. The projected total error on the observed NC event rate for this phase is 6% [34]. For the CC event rate we assume that the statistical error during phase III will be approximately the same as in each of the earlier two phases, while the systematic error is taken to be 4.5%, i.e., slightly smaller than the 5% reported in phases I and II of SNO. Thus, we assume that the total error in the CC event rate measurement from all three phases combined will be about 5%. If the central value of the CC to NC event rate ratio remains the same as observed in the salt phase, the CC/NC ratio expected to be measured in phase III of the SNO experiment will be

$$R^{He}_{CC/NC}=0.305\pm0.024.$$
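Eq. (14) is simple enough to evaluate directly; the night rate used below is an illustrative value chosen to reproduce the predicted central asymmetry, not a measured number:

```python
def day_night_asymmetry(night_rate, day_rate):
    """D-N asymmetry of eq. (14): A_DN = 2 (N - D) / (N + D)."""
    return 2.0 * (night_rate - day_rate) / (night_rate + day_rate)

# The predicted central value A_DN = 0.034 corresponds to a night rate
# roughly 3.5% above the day rate, e.g.:
print(round(day_night_asymmetry(1.0346, 1.0), 3))  # -> 0.034
```

Equal day and night rates give $A_{DN}=0$, i.e. no Earth matter regeneration effect.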
(17)

Since the CC/NC ratio in SNO is mainly related to the solar neutrino mixing angle (see, e.g., ref. [33], eq. (10) and the left-hand panel in Fig. 8), the reduction in the error is expected to result in an improvement in the precision of the $\sin^2\theta_{12}$ determination. We have made a projected analysis of the global solar neutrino data, including the upgraded CC and NC errors expected from phase III of the SNO experiment. The range of allowed values of $\sin^2\theta_{12}$ could thereby be appreciably reduced at 99% C.L. (99.73% C.L.) [35]. There has been a recent proposal of adding 0.1% gadolinium to the water in the Super-Kamiokande detector to improve the detector sensitivity to neutrons [36]. This would result in a remarkable increase in the detector sensitivity to low-energy $\bar\nu_e$, transforming SK into a huge reactor antineutrino detector (SK-Gd), with an event rate that is about 43 times larger than that observed in KamLAND [36, 35]. After 5 years of data taking, the SK-Gd experiment could measure $\Delta m^2_{21}$ and $\sin^2\theta_{12}$ with high precision at 99% C.L. [35]. As discussed earlier, a very precise measurement of $\sin^2\theta_{12}$ can be achieved in a reactor experiment with a baseline corresponding to an SPMIN of the $\bar\nu_e$ survival probability [24]. The condition for an SPMIN is $L\simeq\lambda/2$, with $\lambda$ given by eq. (9). For the low-LMA solution region and the average energy of the $\bar\nu_e$ observed in the KamLAND experiment, this corresponds to a distance of approximately (50 - 70) km [24]. For an experiment with a 70 km baseline and 24.3 GW (18.6 GW) reactor power, corresponding to the Kashiwazaki (Daya Bay) complex in Japan (China), $\sin^2\theta_{12}$ can be determined with a small error at 99% C.L. with a 3 kTy (4 kTy) statistics [24]. The forthcoming solar neutrino experiments are Borexino [37] and the solar phase of KamLAND, which will provide an accurate measurement of the $^7$Be neutrino flux, and the Low energy solar Neutrino (LowNu) experiments [39, 38], which are designed to measure the flux of $pp$ solar neutrinos.
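The projected $\pm 0.024$ error on the phase III CC/NC ratio of eq. (17) follows from combining the assumed 5% CC and 6% NC fractional errors in quadrature, which a short calculation confirms:

```python
import math

def ratio_error(ratio, frac_err_num, frac_err_den):
    """First-order propagation of uncorrelated fractional errors on the
    numerator and denominator to the absolute error on their ratio."""
    return ratio * math.sqrt(frac_err_num ** 2 + frac_err_den ** 2)

# 5% CC and 6% NC errors on R = 0.305 reproduce the projected +/- 0.024.
print(round(ratio_error(0.305, 0.05, 0.06), 3))  # -> 0.024
```

The quadrature sum is valid because the $^3$He counters make the NC measurement statistically independent of the CC one.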
The potential of Borexino and any generic LowNu experiment [40] in constraining the solar neutrino oscillation parameters has been studied recently in [24, 41]. For the allowed regions obtained in this paper, we find the predicted rates for the Borexino and LowNu experiments to be

$$R_{Be}=0.67\quad(3\sigma\ \text{range}:\ 0.62-0.72),\qquad(18)$$

$$R_{pp}=0.71\quad(3\sigma\ \text{range}:\ 0.67-0.76).\qquad(19)$$

## 6 Conclusions

We have investigated the implications of including the recent KamLAND spectrum data in the global solar neutrino oscillation analysis. The observed spectral distortion in the KamLAND experiment firmly establishes $\Delta m^2_{21}$ to lie in the low-LMA solution region. The high-LMA solution is excluded at more than 4$\sigma$ by the global solar neutrino and KamLAND spectrum data. Maximal solar neutrino mixing is ruled out at more than the 6$\sigma$ level for a one-parameter fit. We have found that the allowed region in the $\Delta m^2_{21}$-$\sin^2\theta_{12}$ plane remains remarkably stable even when we leave out the data from one of the solar neutrino experiments from the global fit. Likewise, there is practically no increase in the allowed region when one goes from a two- to a three-flavor neutrino oscillation analysis of the global solar neutrino and KamLAND spectrum data; an upper limit on $\sin^2\theta_{13}$ was also derived. We have derived predictions for the CC to NC event rate ratio and the day-night (D-N) asymmetry in the CC event rate, measured in the SNO experiment, and for the suppression of the event rate in the BOREXINO and LowNu experiments, designed to measure the $^7$Be and $pp$ solar neutrino fluxes. With the value of $\Delta m^2_{21}$ determined more precisely using the current solar neutrino and KamLAND data, the predicted range of possible values of the day-night asymmetry in the CC event rate at SNO narrows down to (0.025 - 0.041) at 99.73% C.L. Remarkably high precision in the measurement of $\Delta m^2_{21}$ can be achieved with the Super-Kamiokande detector loaded with 0.1% gadolinium: this would transform Super-Kamiokande into a huge reactor $\bar\nu_e$ detector (SK-Gd) with an event rate 43 times larger than that observed in the KamLAND experiment.
Finally, we have discussed how the precision of determination can improve with increasing precision of the future SNO data, with the SK-Gd reactor oscillation experiment, as well as by performing a reactor oscillation experiment with a baseline of km. With the publication of the latest KamLAND data, the neutrino oscillation origin of the observed solar neutrino deficit is firmly established. The future high-precision measurements of the solar neutrino oscillation parameters will be of fundamental importance for understanding the true origin of the flavour neutrino mixing. S.G. and D.P.R. would like to thank respectively SISSA and The Abdus Salam International Centre for Theoretical Physics for hospitality. This work was supported by the Italian INFN under the program “Fisica Astroparticellare” (S.T.P.).

## References
https://www.khanacademy.org/science/physics/centripetal-force-and-gravitation/centripetal-forces/v/bowling-ball-in-vertical-loop
# Bowling ball in vertical loop Video transcript - [Narrator] Imagine that in an effort to make bowling more exciting, bowling alleys put a big loop-the-loop in the middle of the lane, so you had to bowl the ball really fast to get the ball up and around the loop and then only afterward, it would go hit the bowling pins kinda like mini golf bowling or something like that. Well if you were gonna build this, you'd have to know at the top of the loop, this structure's gonna have to withstand a certain minimum amount of force. You might wanna know how strong do you have to make this. You can't have this thing breaking because it can't withstand the force of the bowling ball. So let's ask ourselves that question. How much force is this loop structure gonna have to be able to exert while this bowling ball is going around in a circle and let's pick this point at the top to analyze. We'll put some numbers in here. Let's say the ball was going eight meters per second at the top of the loop. That's pretty darn fast so someone really hurled this thing through here. Now let's say the loop has a radius of two meters and the bowling ball has a mass of four kilograms, which is around eight or nine pounds. Now that we have these numbers, we can ask the question: How much normal force is there gonna be between the loop and the ball? So in other words, what is the size of that normal force, the force between the two surfaces? This is what we'd have to know in order to figure out if our structure is strong enough to contain this bowling ball as it goes around in a circle. And it's also a classic centripetal force problem, so let's do this. What do we do first? We should always draw a force diagram. If we're looking for a force, you draw a force diagram. So what are the forces on this ball? 
You're gonna have a force of gravity downward, and the magnitude of the force of gravity is always given by M times G, where G represents the magnitude of the acceleration due to gravity. And we're gonna have a normal force as well. Now which way does this normal force point? A common misconception, people wanna say that that normal force points up because in a lot of other situations, the normal force points up. If you're just standing on the ground over here, the normal force on you is upward because it keeps you from falling through the ground, but that's not what this loop structure's doing up here. The loop structure isn't keeping you up. The loop structure's keeping you from flying out of the loop and that means this normal force is gonna have to point downward. So this is weird for a lot of people to think about, but because the surface is above this ball, the surface pushes down. Surfaces can only push. If the surface is below you, the surface has to push up. If the surface was to the left of you, the surface would have to push right. And if the surface was to the right of you, the surface would have to push left. Normal forces, in other words, always push. So the force on the ball from the track is gonna be downward, and vice versa: the force on the track from the ball is gonna be upward. So if this ball were going a little too fast and this were made out of wood, you might see this thing splinter because there's too much force pushing on the track this way. But if we're analyzing the ball, the force on the ball from the track is downward. And after you draw a force diagram, the next step is usually, if you wanna find a force, to use Newton's Second Law. And to keep the calculation simple, we typically use Newton's Second Law for a single dimension at a time, i.e. vertical, horizontal, centripetal. 
And that's what we're gonna use in this case because the normal force is pointing toward the center of the circular path and the normal force is the force we wanna find, we're gonna use Newton's Second Law for the centripetal direction and remember centripetal is just a fancy word for pointing toward the center of the circle. So, let's do it. Let's write down that the centripetal acceleration should equal the net centripetal force divided by the mass that's going in the circle. So if we choose this, we know that the centripetal acceleration can always be re-written as the speed squared divided by the radius of the circular path that the object is taking, and this should equal the net centripetal force divided by the mass of the object that's going in the circle and you gotta remember how we deal with signs here because we put a positive sign over here because we have a positive sign for our centripetal acceleration and our centripetal acceleration points toward the center of the circle always. Then in toward the center of the circle is going to be our positive direction, and that means for these forces, we're gonna plug in forces toward the center of the circle as positive. So let's do that. This is the part where most of the problem is happening. You gotta be careful here. I'm just gonna plug in. What are the centripetal forces? To figure that out, we just look at our force diagram. What forces do we have in our diagram? We've got the normal force and the force of gravity. Let's start with gravity. Is the gravitational force going to be a centripetal force? First of all, that's the question you have to ask. Does it even get included in here at all? And to figure that out you ask: Does it point centripetally? I.e. does it point toward the center of the circle? And it does so we're gonna include the force of gravity moreover because it points toward the center of the circle as opposed to radially away from the center of the circle. 
We're gonna include this as a positive centripetal force. Similarly, for the normal force, it also points toward the center of the circle, so we include it in this calculation and it as well will be a positive centripetal force. And now we can solve for the normal force. If I solve algebraically, I can multiply both sides by the mass and then I'd subtract MG from both sides. And that would give me the mass times V squared over R minus the magnitude of the force of gravity, which if we plug in numbers, gives us four kilograms times eight meters per second squared, you can't forget the square, divided by a two meter radius minus the magnitude of the force of gravity which is four kilograms times G which if you multiply that out gives you 88.8 newtons. This is how much downward force is exerted on the ball from the track but from Newton's Third Law, we know that that is also how much force the ball exerts upward on the track. So whatever you make this loop out of, it better be able to withstand 88.8 newtons if people are gonna be rolling this ball around the loop at eight meters per second. Now let me ask you this. What if the ball makes it over to here, right? So the ball rolls around and now it's over at this point. Now how much normal force is there at this point? Is it gonna be greater than, less than, or equal to 88.8 newtons? Well to figure it out, we should draw a force diagram. So there's gonna be a force of gravity. Again, it's gonna point straight down, and again, its magnitude will be equal to the mass times the magnitude of the acceleration due to gravity. And then we also have a normal force, but this time, the normal force does not push down. Remember, surfaces push outward and if this surface is to the left of the ball, the surface pushes to the right. This time our normal force points to the right. And let's assume this is a well-oiled track so there's really no friction to worry about. 
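The top-of-the-loop calculation above can be sketched in a few lines. Both gravity and the normal force point toward the center there, so the centripetal equation is m v²/r = N + m g, using the transcript's numbers and g = 9.8 m/s².

```python
# Normal force on the ball at the top of the loop, using the transcript's numbers.
m = 4.0    # kg, bowling ball mass
v = 8.0    # m/s, speed at the top of the loop
r = 2.0    # m, loop radius
g = 9.8    # m/s^2, magnitude of acceleration due to gravity

# At the top, both forces point toward the center (downward), so:
#   m * v**2 / r = N + m * g   =>   N = m * v**2 / r - m * g
N_top = m * v**2 / r - m * g
print(N_top)  # 88.8 newtons, matching the transcript
```

By Newton's Third Law this 88.8 N is also the outward force the ball exerts on the track, which is the load the structure has to withstand at that point.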
In that case, these would again be the only two forces. So what about the answer to our question. Will this normal force now be bigger than, less than, or equal to what the normal force was at the top? Well I'm gonna argue it's gotta be bigger, and I'm gonna argue it's gonna have to be much bigger because when you plug in over here, into the centripetal forces, you only plug in forces that point radially. That is to say centripetally, either into the circle, which would be positive, or radially out of the circle, which would be negative. If they neither point into nor out of the circle, you don't include them in this calculation at all because they aren't pointing in the direction of the centripetal acceleration. In other words, they're not causing the centripetal acceleration. So for this case over here, gravity is no longer a centripetal force because the force of gravity no longer points toward the center of the circle. This force of gravity is tangential to the circle. It's neither pointing into nor out of, which means it doesn't factor into the centripetal motion at all. It merely tries to speed the ball up at this point. It does not change the ball's direction, which means it doesn't contribute to making this ball go in a circle, so we don't include it in this calculation. So when we solved for the normal force, we'd multiply both sides by M, we would not have an MG anymore. So we wouldn't be subtracting this term and that's gonna make our normal force bigger. Moreover, the speed of this ball's gonna increase compared to what it was up here. So as the ball falls down, gravity's going to speed this ball up, and now that its speed is larger and we're not subtracting anything from it, the normal force will be much greater at this point compared to what it was at the top of the loop. So recapping, when you wanna solve the centripetal force problem, always draw your force diagram first. 
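The transcript argues the side-of-loop normal force is much bigger but doesn't compute a number. A sketch of that comparison, under the transcript's frictionless assumption, using energy conservation for the drop of one radius from the top of the loop to the side (the height values are my assumption, not stated in the video):

```python
# Comparing the normal force at the side of the loop to the top.
m, r, g = 4.0, 2.0, 9.8   # kg, m, m/s^2 (transcript's numbers)
v_top = 8.0               # m/s at the top

# Frictionless track: falling a height r (top -> side) increases v^2 by 2*g*r.
v_side_sq = v_top**2 + 2 * g * r

# At the side, gravity is tangential (not centripetal), so the normal force
# alone provides the centripetal force: N = m * v^2 / r, with nothing subtracted.
N_side = m * v_side_sq / r

# At the top, gravity helped, so N = m * v^2 / r - m * g.
N_top = m * v_top**2 / r - m * g
print(N_side, N_top)  # 206.4 vs 88.8 newtons
```

Both effects the narrator names show up: the m g term is no longer subtracted, and the speed itself has grown, so the side force comes out well over twice the top force.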
If you choose to analyze the forces in the centripetal direction, in other words, for the direction in toward the center of the circle, make sure you only plug in forces that point radially into the circle or radially out of the circle. If they're radially into the circle, you make them positive. If they're radially out of the circle, you make them negative. And if they neither point radially inward, toward the center of the circle, nor radially outward, away from the center of the circle, you just do not include those forces at all when using this centripetal direction.
https://math.meta.stackexchange.com/questions/12815/curious-page-top-questions
# Curious page top questions When you go to https://math.stackexchange.com/ you get to a page that lists the Top Questions. • How is it decided which questions appear on this page? Also, from the normal questions page https://math.stackexchange.com/questions there is no link to the Top Questions page. What is happening here? Perhaps calling this view "top" is a bit misleading. The word makes more sense on StackOverflow, where the default tab works differently. But in a very weak sense, the questions on the front page are "top scoring questions" - namely, questions scored $<-3$ do not appear there.
http://www.openproblemgarden.org/op/fractional_hadwiger
Importance: Medium ✭✭ Author(s): Harvey, Daniel J. Reed, Bruce A. Seymour, Paul D. Wood, David R. Subject: Graph Theory Keywords: fractional coloring, minors Posted by: David Wood on: March 16th, 2014 \begin{conjecture} For every graph $G$,\\ (a) $\chi_f(G)\leq\text{had}(G)$\\ (b) $\chi(G)\leq\text{had}_f(G)$\\ (c) $\chi_f(G)\leq\text{had}_f(G)$. \end{conjecture} Here $\chi$ is the chromatic number, $\chi_f$ is the fractional chromatic number, $\text{had}$ is the Hadwiger number, and $\text{had}_f$ is the fractional Hadwiger number (which was recently introduced independently by Fox [F] and Pedersen [P]). It is well known and easily proved (see [HW]) that \\ $\chi_f(G)\leq\chi(G)\text{ and }\text{had}(G)\leq\text{had}_f(G)\leq\text{tw}(G)+1,$\\ where $\text{tw}(G)$ is the treewidth of $G$. Hadwiger's famous conjecture, $\chi(G)\leq\text{had}(G)$, bridges the gap in the above inequalities. The above conjectures are therefore weaker than Hadwiger's conjecture. Note that Conjecture (a) implies Conjecture (c), and Conjecture (b) implies Conjecture (c). Note that Reed and Seymour [RS] proved that $\chi_f(G)\leq2\,\text{had}(G)$. Conjecture (a) is due to Reed and Seymour [RS]. Conjecture (b) is due to Harvey and Wood [HW]. Conjecture (c) is independently due to Harvey and Wood [HW] and Pedersen [P]. Pedersen [P] presents a natural equivalent formulation of Conjecture (c). ## Bibliography *[HW] Daniel J. Harvey, David R. Wood, Parameters tied to treewidth. \arxiv{1312.3401}, 2013. [F] Jacob Fox. \arxiv[Constructing dense graphs with sublinear Hadwiger number]{1108.4953}. J. Combin. Theory Ser. B (to appear). *[P] Anders Sune Pedersen. \href[Contributions to the Theory of Colourings, Graph Minors, and Independent Sets]{http://www.imada.sdu.dk/~asp/Thesis_2ed.pdf}, PhD thesis, Department of Mathematics and Computer Science University of Southern Denmark, 2011. *[RS] Bruce A. Reed, Paul D. Seymour, Fractional colouring and Hadwiger's conjecture. J. Combin. Theory Ser. B, 74(2), 147-152. * indicates original appearance(s) of problem.
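A small numerical illustration of Conjecture (a), not taken from the page above: for the 5-cycle $C_5$ one has $\chi_f(C_5)=5/2$ (for a vertex-transitive graph, $\chi_f(G)=n/\alpha(G)$) and $\text{had}(C_5)=3$, since contracting edges of $C_5$ yields $K_3$ but no $K_4$ minor exists. The sketch brute-forces the independence number.

```python
from itertools import combinations
from fractions import Fraction

# Check conjecture (a), chi_f(G) <= had(G), on the 5-cycle C5.
n = 5
edges = {frozenset((i, (i + 1) % n)) for i in range(n)}

def independent(vertices):
    # A set is independent if no pair of its vertices forms an edge.
    return all(frozenset(p) not in edges for p in combinations(vertices, 2))

# Independence number alpha(C5) by brute force over all vertex subsets.
alpha = max(len(s)
            for k in range(n + 1)
            for s in combinations(range(n), k)
            if independent(s))

# C5 is vertex-transitive, so chi_f(C5) = n / alpha(C5).
chi_f = Fraction(n, alpha)   # 5/2
had = 3                      # C5 contracts to K3; it has no K4 minor
print(chi_f, had)
assert chi_f <= had          # conjecture (a) holds for this example
```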